InterFace: A software package for face image warping, averaging, and principal components analysis.
Kramer, Robin S S; Jenkins, Rob; Burton, A Mike
2017-12-01
We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
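The PCA "face space" described here can be illustrated outside of InterFace as well. The following is a minimal Python sketch (using NumPy and scikit-learn, not the InterFace/MATLAB code itself) of computing principal components from a stack of aligned face images and projecting a face into the resulting space; the array shapes and the random placeholder data are illustrative assumptions.

```python
# Minimal sketch of the PCA "face space" idea (not InterFace itself):
# aligned grayscale face images are flattened to vectors, decomposed with PCA,
# and any face can then be described by its coordinates in the component space.
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: 100 aligned 128x128 grayscale faces as a NumPy array.
faces = np.random.rand(100, 128, 128)          # placeholder for real image data
X = faces.reshape(len(faces), -1)              # one row per face, one column per pixel

pca = PCA(n_components=20)                     # keep the first 20 components
scores = pca.fit_transform(X)                  # each face's coordinates in "face space"

# Reconstruct the first face from its 20 coordinates and the mean face.
reconstruction = pca.inverse_transform(scores[:1]).reshape(128, 128)
print(scores.shape, reconstruction.shape)      # (100, 20) (128, 128)
```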
SIMA: Python software for analysis of dynamic fluorescence imaging data.
Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila
2014-01-01
Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
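For readers who want a feel for the workflow described above, the following Python sketch strings together motion correction, ROI segmentation, and signal extraction in the style of the SIMA documentation; the class names and signatures (`sima.Sequence.create`, `HiddenMarkov2D`, `STICA`) are reproduced from memory and should be treated as assumptions to check against the current SIMA API.

```python
# Sketch of a typical SIMA workflow: motion correction, ROI segmentation,
# and signal extraction. Class names and arguments are recalled from the SIMA
# documentation and should be treated as approximate.
import sima
import sima.motion
import sima.segment

# One imaging sequence loaded from a (hypothetical) multi-page TIFF stack.
sequences = [sima.Sequence.create('TIFF', 'example_movie.tif')]

# Correct motion artifacts with the hidden-Markov-model approach.
mc = sima.motion.HiddenMarkov2D(granularity='row', max_displacement=[20, 30])
dataset = mc.correct(sequences, 'example_dataset.sima')

# Automatically segment ROIs, then extract fluorescence signals from them.
dataset.segment(sima.segment.STICA(components=5), label='auto_ROIs')
signals = dataset.extract()
print(signals['raw'][0].shape)   # frames x ROIs for the first sequence (assumed layout)
```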
Image analysis to evaluate the browning degree of banana (Musa spp.) peel.
Cho, Jeong-Seok; Lee, Hyeon-Jeong; Park, Jung-Hoon; Sung, Jun-Hyung; Choi, Ji-Young; Moon, Kwang-Deog
2016-03-01
Image analysis was applied to examine banana peel browning. The banana samples were divided into three treatment groups: no treatment and normal packaging (Cont); CO2 gas exchange packaging (CO); and normal packaging with an ethylene generator (ET). Based on a sensory test and an enzyme assay, we confirmed that browning of the banana peels developed more quickly in the CO group than in the other groups. The G (green) and CIE L*, a*, and b* values obtained from the image analysis sharply increased or decreased in the CO group, and these colour values showed high correlation coefficients (>0.9) with the sensory test results. CIE L*a*b* values measured with a colorimeter also showed high correlation coefficients, but comparatively lower than those obtained from image analysis. Based on this analysis, browning of the banana occurred more quickly with CO2 gas exchange packaging, and image analysis can be used to evaluate the browning of banana peels.
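As a rough illustration of the kind of image-based colour measurement used in this study (not the authors' own code), the sketch below extracts mean green-channel and CIE L*a*b* values from a peel image with scikit-image and correlates them with sensory browning scores using SciPy; the file names and the score list are placeholders.

```python
# Illustrative sketch: mean colour values from banana-peel images correlated
# with sensory browning scores (placeholder file names and scores, not the study's data).
import numpy as np
from skimage import io, color
from scipy.stats import pearsonr

def mean_colour_features(path):
    rgb = io.imread(path)[..., :3] / 255.0        # assumes an 8-bit RGB(A) image
    lab = color.rgb2lab(rgb)
    return {
        'G': rgb[..., 1].mean(),                  # mean green channel
        'L*': lab[..., 0].mean(),
        'a*': lab[..., 1].mean(),
        'b*': lab[..., 2].mean(),
    }

# Hypothetical images over storage days and matching sensory browning scores.
paths = [f'peel_day{d}.png' for d in range(1, 7)]
sensory = [1.0, 1.5, 2.2, 3.0, 3.8, 4.5]

features = [mean_colour_features(p) for p in paths]
for key in ('G', 'L*', 'a*', 'b*'):
    r, p = pearsonr([f[key] for f in features], sensory)
    print(f'{key}: r = {r:.2f} (p = {p:.3f})')
```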
On the release of cppxfel for processing X-ray free-electron laser images.
Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K; Stuart, David Ian
2016-06-01
As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++ that showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely also to be useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.
NASA Technical Reports Server (NTRS)
1980-01-01
The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.
PCIPS 2.0: Powerful multiprofile image processing implemented on PCs
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Over the years, the processing power of personal computers has steadily increased. Now, 386- and 486-based PCs are fast enough for many image processing applications, and inexpensive enough even for amateur astronomers. PCIPS is an image processing system based on these platforms that was designed to satisfy a broad range of data analysis needs, while requiring minimum hardware and providing maximum expandability. It will run (albeit at a slow pace) even on an 80286 with 640K of memory, but will take full advantage of larger memory and faster CPUs. Because the actual image processing is performed by external modules, the system can be easily upgraded by the user for all sorts of scientific data analysis. PCIPS supports large-format 1D and 2D images in any numeric type from 8-bit integer to 64-bit floating point. The images can be displayed, overlaid, printed, and any part of the data examined via an intuitive graphical user interface that employs buttons, pop-up menus, and a mouse. PCIPS automatically converts images between different types and sizes to satisfy the requirements of various applications. PCIPS features an API that lets users develop custom applications in C or FORTRAN. While doing so, a programmer can concentrate on the actual data processing, because PCIPS assumes responsibility for accessing images and interacting with the user. This also ensures that all applications, even custom ones, have a consistent and user-friendly interface. The API is compatible with factory programming, a metaphor for constructing image processing procedures that will be implemented in future versions of the system. Several application packages were created under PCIPS. The basic package includes elementary arithmetic and statistics, geometric transformations, and import/export in various formats (FITS, binary, ASCII, and GIF). The CCD processing package and the spectral analysis package were successfully used to reduce spectra from the Nordic Telescope at La Palma. A photometry package is also available, and other packages are being developed. A multitasking version of PCIPS that utilizes the factory programming concept is currently under development. This version will remain compatible (at the source code level) with existing application packages and custom applications.
SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.
Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A
2016-11-01
Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame to frame and to characterize cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame to frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact-mediated phenomena. The package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and the computation of statistics on cellular fluorescence and on the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single-cell and population-level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag-phase growth with single-cell resolution.
NASA Technical Reports Server (NTRS)
Vu, Duc; Sandor, Michael; Agarwal, Shri
2005-01-01
CSAM Metrology Software Tool (CMeST) is a computer program for the analysis of false-color CSAM images of plastic-encapsulated microcircuits. (CSAM signifies C-mode scanning acoustic microscopy.) The colors in the images indicate areas of delamination within the plastic packages. Heretofore, the images have been interpreted by human examiners; hence, interpretations have not been entirely consistent and objective. CMeST processes the color information in image-data files to detect areas of delamination without incurring the inconsistencies of subjective judgement. CMeST can be used to create a database of baseline images of packages acquired at given times for comparison with images of the same packages acquired at later times. Any area within an image can be selected for analysis, which can include examination of different delamination types by location. CMeST can also be used to perform statistical analyses of image data. Results of analyses are available in a spreadsheet format for further processing. The results can be exported to any database-processing software.
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today's single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction- and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills.
Wang, Anliang; Yan, Xiaolong; Wei, Zhijun
2018-04-27
This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution concentrates on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. In particular, with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open-source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. Contact: wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.
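To give a sense of how ImagePy's decoupling of data model and user interface looks in practice, here is a sketch of a filter plugin in the style of the ImagePy documentation; the module path, hook names, and parameter/dialog conventions are reproduced from memory and should be treated as assumptions to verify against the current ImagePy source.

```python
# Sketch of an ImagePy-style filter plugin (module paths, hook names and the
# 'view' tuple format are assumptions based on the ImagePy documentation).
import scipy.ndimage as ndimg
from imagepy.core.engine import Filter

class GaussianDemo(Filter):
    title = 'Gaussian Demo'
    note = ['all', 'auto_msk', 'auto_snap', 'preview']     # UI/behaviour flags
    para = {'sigma': 2.0}
    view = [(float, 'sigma', (0, 30), 1, 'sigma', 'pix')]  # auto-generated dialog

    def run(self, ips, snap, img, para=None):
        # The engine hands in the image data; the plugin never touches the GUI.
        ndimg.gaussian_filter(snap, para['sigma'], output=img)
```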
Large space telescope, phase A. Volume 4: Scientific instrument package
NASA Technical Reports Server (NTRS)
1972-01-01
The design and characteristics of the scientific instrument package for the Large Space Telescope are discussed. The subjects include: (1) general scientific objectives, (2) package system analysis, (3) scientific instrumentation, (4) imaging photoelectric sensors, (5) environmental considerations, and (6) reliability and maintainability.
Chen, Hui; van Eijnatten, Maureen; Wolff, Jan; de Lange, Jan; van der Stelt, Paul F; Lobbezoo, Frank; Aarab, Ghizlane
2017-08-01
The aim of this study was to assess the reliability and accuracy of three different imaging software packages for three-dimensional analysis of the upper airway using CBCT images. To assess the reliability of the software packages, 15 NewTom 5G® (QR Systems, Verona, Italy) CBCT data sets were randomly and retrospectively selected. Two observers measured the volume, minimum cross-sectional area and the length of the upper airway using the Amira® (Visage Imaging Inc., Carlsbad, CA), 3Diagnosys® (3diemme, Cantu, Italy) and OnDemand3D® (CyberMed, Seoul, Republic of Korea) software packages. The intra- and inter-observer reliability of the upper airway measurements were determined using intraclass correlation coefficients and Bland & Altman agreement tests. To assess the accuracy of the software packages, one NewTom 5G® CBCT data set was used to print a three-dimensional anthropomorphic phantom with known dimensions to be used as the "gold standard". This phantom was subsequently scanned using a NewTom 5G® scanner. Based on the CBCT data set of the phantom, one observer measured the volume, minimum cross-sectional area, and length of the upper airway using Amira®, 3Diagnosys®, and OnDemand3D®, and compared these measurements with the gold standard. The intra- and inter-observer reliability of the measurements of the upper airway using the different software packages were excellent (intraclass correlation coefficient ≥0.75). There was excellent agreement between all three software packages in volume, minimum cross-sectional area and length measurements. All software packages underestimated the upper airway volume by -8.8% to -12.3%, the minimum cross-sectional area by -6.2% to -14.6%, and the length by -1.6% to -2.9%. All three software packages offered reliable volume, minimum cross-sectional area and length measurements of the upper airway. The length measurements of the upper airway were the most accurate results in all software packages. All software packages underestimated the upper airway dimensions of the anthropomorphic phantom.
MOSAIC: Software for creating mosaics from collections of images
NASA Technical Reports Server (NTRS)
Varosi, F.; Gezari, D. Y.
1992-01-01
We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented using the Interactive Data Language (IDL), and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures, while at the same time preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or Sun View graphics interface.
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java
Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2015-01-01
Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.
iScreen: Image-Based High-Content RNAi Screening Analysis Tools.
Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua
2015-09-01
High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics in a genome-wide pattern. However, such studies are often restricted to assays that have a single readout format. Recently, advanced image technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell images, instead of a single readout, are generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (image-based high-content RNAi screening analysis tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies were used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document.
Mesoscale and severe storms (Mass) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package that converts four types of data (Sounding, Single Level, Grid, Image) into standard random-access formats has been implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. The interactive analysis and display graphics software package (AVE80), used to analyze large volumes of conventional and satellite-derived meteorological data, has been enhanced to provide imaging/color graphics display, utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by APPLE III computer systems installed in individual scientists' offices and integrated with the MASS system, providing color video display, graphics, and character display of the four data types.
Bodzon-Kulakowska, Anna; Marszalek-Grabska, Marta; Antolak, Anna; Drabik, Anna; Kotlinska, Jolanta H; Suder, Piotr
Data analysis for mass spectrometry imaging (MSI) experiments is a very complex task. Most of the software packages devoted to this purpose are designed by the mass spectrometer manufacturers and, thus, are not freely available. Laboratories developing their own MS-imaging sources usually do not have access to the commercial software and must rely on freely available programs. The most recognized of these are BioMap, developed by Novartis under Interactive Data Language (IDL), and Datacube, developed by the Dutch Foundation for Fundamental Research on Matter (FOM-AMOLF). These two systems were used here for the analysis of images obtained from rat brain tissues subjected to morphine influence, and their capabilities were compared in terms of ease of use and the quality of the results obtained.
NASA Astrophysics Data System (ADS)
Lestari Widaningrum, Dyah
2014-03-01
This research aims to investigate the importance of take-out food packaging attributes, using conjoint analysis and a QFD approach, among consumers of take-out food products in Jakarta, Indonesia. The conjoint results indicate that packaging material (such as paper, plastic, or polystyrene foam) plays the most important role overall in consumer perception. The clustering results show strong segmentation in which take-out food packaging attribute consumers consider most important. Some consumers are mostly oriented toward the colour of the packaging, while other segments of customers focus on packaging shape and packaging information. Segmentation variables based on packaging response can provide very useful information for maximizing product image through the package's impact. The development of the House of Quality showed that Conjoint Analysis-QFD is a useful combination of the two methodologies for product development, market segmentation, and trading off customers' requirements in the early stages of the HOQ process.
Mass decomposition of galaxies using DECA software package
NASA Astrophysics Data System (ADS)
Mosenkov, A. V.
2014-01-01
The new DECA software package, designed to perform photometric analysis of images of disk and elliptical galaxies having a regular structure, is presented. DECA is written in the interpreted Python language and combines the capabilities of several widely used packages for astronomical data processing, such as IRAF, SExtractor, and the GALFIT code, which is used to perform two-dimensional decomposition of galaxy images into several photometric components (bulge + disk). DECA has the advantage that it can be applied to large samples of galaxies with different orientations with respect to the line of sight (including edge-on galaxies) and requires minimum human intervention. Examples of using the package to study a sample of simulated galaxy images and a sample of real objects show that DECA can be a reliable tool for studying the structure of galaxies.
MOPEX: a software package for astronomical image processing and visualization
NASA Astrophysics Data System (ADS)
Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley
2006-06-01
We present MOPEX, a software package for astronomical image processing and display. The package is a combination of command-line-driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though it was originally designed for the Spitzer Space Telescope mission, much of the functionality is of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line-driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution on the Spitzer Science Center web page.
Hyperspectral imaging for differentiation of foreign materials from pinto beans
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Zemlan, Michael; Henry, Sam
2015-09-01
Food safety and quality in packaged products are paramount in the food processing industry. To ensure that packaged products are free of foreign materials, such as debris and pests, unwanted materials mixed with the targeted products must be detected before packaging. A portable hyperspectral imaging system in the visible-to-NIR range has been used to acquire hyperspectral data cubes from pinto beans that have been mixed with foreign matter. Bands and band ratios have been identified as effective features to develop a classification scheme for detection of foreign materials in pinto beans. A support vector machine has been implemented with a quadratic kernel to separate pinto beans and background (Class 1) from all other materials (Class 2) in each scene. After creating a binary classification map for the scene, further analysis of these binary images allows separation of false positives from true positives for proper removal action during packaging.
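The classification step described above can be sketched generically in Python with scikit-learn; this is an illustration of the approach rather than the authors' implementation, and the band indices, the band-ratio feature, and the class labels are placeholders.

```python
# Generic sketch of the classification approach: selected wavebands and a band
# ratio as features, with a quadratic-kernel SVM separating beans/background
# (class 1) from foreign material (class 2). All data here are placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cube = rng.random((60, 60, 240))              # placeholder hyperspectral cube (rows, cols, bands)
labels = rng.integers(1, 3, size=(60, 60))    # placeholder ground truth: 1 or 2 per pixel

b1, b2 = 50, 170                              # hypothetical selected waveband indices
features = np.stack([cube[..., b1],
                     cube[..., b2],
                     cube[..., b1] / (cube[..., b2] + 1e-6)],  # band ratio
                    axis=-1).reshape(-1, 3)

clf = SVC(kernel='poly', degree=2)            # quadratic-kernel support vector machine
clf.fit(features, labels.ravel())

# Binary classification map for the scene; false positives can then be examined
# against reference labels before removal decisions are made.
class_map = clf.predict(features).reshape(60, 60)
print((class_map == 2).sum(), 'pixels flagged as foreign material')
```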
POLYSITE - An interactive package for the selection and refinement of Landsat image training sites
NASA Technical Reports Server (NTRS)
Mack, Marilyn J. P.
1986-01-01
A versatile multifunction package, POLYSITE, developed for Goddard's Land Analysis System, is described; it simplifies the process of interactively selecting and correcting the sites used to study Landsat TM and MSS images. Switching between the zoomed and non-zoomed image, changing the cursor's color and shape and displaying its location, and erasing or recoloring bit planes are global functions that are active at all times. Local functions include manipulation of intensive study areas, new site definition, mensuration, and new image copying. The program is illustrated with the example of a full TM scene of metropolitan Washington, DC.
Using Cell-ID 1.4 with R for Microscope-Based Cytometry
Bush, Alan; Chernomoretz, Ariel; Yu, Richard; Gordon, Andrew
2012-01-01
This unit describes a method for quantifying various cellular features (e.g., volume, total and subcellular fluorescence localization) from sets of microscope images of individual cells. It includes procedures for tracking cells over time. One purposefully defocused transmission image (sometimes referred to as bright-field or BF) is acquired to segment the image and locate each cell. Fluorescent images (one for each of the color channels to be analyzed) are then acquired by conventional wide-field epifluorescence or confocal microscopy. This method uses the image processing capabilities of Cell-ID (Gordon et al., 2007, as updated here) and data analysis by the statistical programming framework R (R-Development-Team, 2008), which we have supplemented with a package of routines for analyzing Cell-ID output. Both Cell-ID and the analysis package are open-source.
Classification of Korla fragrant pears using NIR hyperspectral imaging analysis
NASA Astrophysics Data System (ADS)
Rao, Xiuqin; Yang, Chun-Chieh; Ying, Yibin; Kim, Moon S.; Chao, Kuanglin
2012-05-01
Korla fragrant pears are small oval pears characterized by light green skin, crisp texture, and a pleasant perfume for which they are named. Anatomically, the calyx of a fragrant pear may be either persistent or deciduous; the deciduous-calyx fruits are considered more desirable due to taste and texture attributes. Chinese packaging standards require that packed cases of fragrant pears contain 5% or less of the persistent-calyx type. Near-infrared hyperspectral imaging was investigated as a potential means for automated sorting of pears according to calyx type. Hyperspectral images spanning the 992-1681 nm region were acquired using an EMCCD-based laboratory line-scan imaging system. Analysis of the hyperspectral images was performed to select wavebands useful for identifying persistent-calyx fruits and for identifying deciduous-calyx fruits. Based on the selected wavebands, an image-processing algorithm was developed that targets automated classification of Korla fragrant pears into the two categories for packaging purposes.
Rastgou, Fereydoon; Shojaeifard, Maryam; Amin, Ahmad; Ghaedian, Tahereh; Firoozabadi, Hasan; Malek, Hadi; Yaghoobi, Nahid; Bitarafan-Rajabi, Ahmad; Haghjoo, Majid; Amouzadeh, Hedieh; Barati, Hossein
2014-12-01
Recently, the phase analysis of gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) has become feasible via several software packages for the evaluation of left ventricular mechanical dyssynchrony. We compared two quantitative software packages, quantitative gated SPECT (QGS) and the Emory cardiac toolbox (ECTb), with tissue Doppler imaging (TDI) as the conventional method for the evaluation of left ventricular mechanical dyssynchrony. Thirty-one patients with severe heart failure (ejection fraction ≤35%) and regular heart rhythm, who were referred for gated-SPECT MPI, were enrolled. TDI was performed within 3 days after MPI. Dyssynchrony parameters derived from gated-SPECT MPI were analyzed with QGS and ECTb and were compared with the Yu index and septal-lateral wall delay measured by TDI. QGS and ECTb showed a good correlation for the assessment of phase histogram bandwidth (PHB) and phase standard deviation (PSD) (r = 0.664 and r = 0.731, respectively; P < .001). However, the mean values of PHB and PSD from ECTb were significantly higher than those from QGS. No significant correlation was found between the ECTb- or QGS-derived parameters and the Yu index. Nevertheless, PHB, PSD, and entropy derived from QGS revealed a significant correlation with septal-lateral wall delay (r = 0.424, r = 0.478, and r = 0.543, respectively; P < .02). Despite the good correlation between the QGS and ECTb software packages, different normal cut-off values of PSD and PHB should be defined for each software package. There was only a modest correlation between the phase analysis of gated-SPECT MPI and TDI data, especially in this population of heart failure patients with both narrow and wide QRS complexes.
A high-level 3D visualization API for Java and ImageJ.
Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin
2010-05-21
Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz
2015-01-01
This article aimed at comparing the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with the i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the largest with XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from the CBCT DICOM data.
VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.
Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro
2016-01-01
In healthy individuals, behavioral outcomes are highly associated with variability in regional brain structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical decline, or the concentration of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework developed to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in MATLAB® and supports imaging formats such as NIfTI-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to fit voxel-wise general and generalized linear models and mixed-effects models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, the package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing its linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to those of existing methods, and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true-positive-rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the estimation of advanced regional association metrics at the voxel level.
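As a language-agnostic illustration of the voxel-wise regression problem that VoxelStats addresses (VoxelStats itself is MATLAB; this is a generic NumPy sketch under assumed array shapes and with placeholder data), the model can be fit for every voxel at once with a single least-squares call:

```python
# Generic sketch of voxel-wise general linear modelling (not the VoxelStats API):
# fit y_v = X @ beta_v at every voxel simultaneously with one least-squares solve.
import numpy as np

n_subjects, n_voxels = 40, 10_000
rng = np.random.default_rng(1)

# Design matrix: intercept, age, and one more covariate (placeholder values).
X = np.column_stack([np.ones(n_subjects),
                     rng.normal(60, 8, n_subjects),
                     rng.normal(0, 1, n_subjects)])

# Response: one image value per subject per voxel (e.g., atrophy or tracer uptake).
Y = rng.normal(size=(n_subjects, n_voxels))

beta, _, rank, _ = np.linalg.lstsq(X, Y, rcond=None)   # beta has shape (3, n_voxels)

# t-statistics for the age regressor at every voxel.
dof = n_subjects - rank
sigma2 = ((Y - X @ beta) ** 2).sum(axis=0) / dof
XtX_inv = np.linalg.inv(X.T @ X)
t_age = beta[1] / np.sqrt(sigma2 * XtX_inv[1, 1])
print(t_age.shape)   # one t value per voxel
```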
Localizer: fast, accurate, open-source, and modular software package for superresolution microscopy
Duwé, Sam; Neely, Robert K.; Zhang, Jin
2012-01-01
We present Localizer, a freely available and open source software package that implements the computational data processing inherent to several types of superresolution fluorescence imaging, such as localization (PALM/STORM/GSDIM) and fluctuation imaging (SOFI/pcSOFI). Localizer delivers high accuracy and performance and comes with a fully featured and easy-to-use graphical user interface but is also designed to be integrated in higher-level analysis environments. Due to its modular design, Localizer can be readily extended with new algorithms as they become available, while maintaining the same interface and performance. We provide front-ends for running Localizer from Igor Pro, Matlab, or as a stand-alone program. We show that Localizer performs favorably when compared with two existing superresolution packages, and to our knowledge is the only freely available implementation of SOFI/pcSOFI microscopy. By dramatically improving the analysis performance and ensuring the easy addition of current and future enhancements, Localizer strongly improves the usability of superresolution imaging in a variety of biomedical studies.
NASA Astrophysics Data System (ADS)
Latief, F. D. E.; Mohammad, I. H.; Rarasati, A. D.
2017-11-01
Digital imaging of a concrete sample using high-resolution tomographic imaging by means of X-ray micro computed tomography (μ-CT) has been conducted to assess the characteristics of the sample's structure. A standard procedure of image acquisition, reconstruction, and image processing for a particular scanning device, the Bruker SkyScan 1173 high-energy micro-CT, is elaborated. A qualitative and a quantitative analysis were briefly performed on the sample to give a basic idea of the capability of the system and the bundled software package. Calculations of total VOI volume, object volume, percent object volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, and total porosity were conducted and analysed. This paper serves as a brief description of how the device can produce the preferred image quality, as well as of the ability of the bundled software packages to support qualitative and quantitative analysis.
NASA Astrophysics Data System (ADS)
Pandey, Palak; Kunte, Pravin D.
2016-10-01
This study presents an easy, modular, user-friendly, and flexible software package for processing Landsat 7 ETM+ and Landsat 8 OLI-TIRS data to estimate suspended particulate matter concentrations in coastal waters. The package includes (1) an algorithm developed using the freely downloadable SCILAB package, (2) ERDAS models for iterative processing of Landsat images, and (3) an ArcMap tool for plotting and map making. Using the SCILAB package, a module is written for geometric corrections, radiometric corrections, and obtaining normalized water-leaving reflectance from Landsat 8 OLI-TIRS and Landsat 7 ETM+ data. Using the ERDAS models, a sequence of modules is developed for iterative processing of Landsat images and estimating suspended particulate matter concentrations. The processed images are used to prepare suspended sediment concentration maps. The applicability of the software package is demonstrated by estimating and plotting seasonal suspended sediment concentration maps off the Bengal delta. The software is flexible enough to accommodate other remotely sensed data, such as Ocean Colour Monitor (OCM), Indian Remote Sensing (IRS), and MODIS data, by replacing a few parameters in the algorithm for estimating suspended sediment concentration in coastal waters.
Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F
1997-12-01
Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments made possible by new graphical interface standards such as the X Window System, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly, object-oriented software package runs on multiple platforms, including Unix, MS Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, and statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome era.
Metlagel, Zoltan; Kikkawa, Yayoi S; Kikkawa, Masahide
2007-01-01
Helical image analysis in combination with electron microscopy has been used to study the three-dimensional structures of various biological filaments or tubes, such as microtubules, actin filaments, and bacterial flagella. A number of packages have been developed to carry out helical image analysis. Some biological specimens, however, have a symmetry break (seam) in their three-dimensional structure, even though their subunits are mostly arranged in a helical manner. We refer to these objects as "asymmetric helices". All the existing packages are designed for helically symmetric specimens and do not allow analysis of asymmetric helical objects, such as microtubules with seams. Here, we describe Ruby-Helix, a new set of programs for the analysis of "helical" objects with or without a seam. Ruby-Helix is built on top of the Ruby programming language and is the first implementation of asymmetric helical reconstruction for practical image analysis. It also allows easier and semi-automated analysis, performing iterative unbending and accurate determination of the repeat length. As a result, Ruby-Helix enables us to analyze motor-microtubule complexes with higher throughput and at higher resolution.
Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter
2012-09-01
Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena such as culture heterogeneity. In this context, computational image processing for the analysis of single-cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, called TLM-Tracker, allows for flexible and user-friendly segmentation, tracking, and lineage analysis of microbial cells in time-lapse movies. The software package, including the manual, a tutorial video, and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.
Multifit/Polydefix: a framework for the analysis of polycrystal deformation using X-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merkel, Sébastien; Hilairet, Nadège
2015-06-27
Multifit/Polydefix is an open source IDL software package for the efficient processing of diffraction data obtained in deformation apparatuses at synchrotron beamlines. Multifit allows users to decompose two-dimensional diffraction images into azimuthal slices, fit peak positions, shapes and intensities, and propagate the results to other azimuths and images. Polydefix is for the analysis of deformation experiments. Starting from output files created in Multifit or other packages, it will extract elastic lattice strains, evaluate sample pressure and differential stress, and prepare input files for further texture analysis. The Multifit/Polydefix package is designed to make the tedious data analysis of synchrotron-based plasticity, rheology or other time-dependent experiments very straightforward and accessible to a wider community.
Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F
2010-01-01
An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
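To make the modelling terminology concrete, the short sketch below integrates a generic one-tissue compartment model, dCT/dt = K1·Cp(t) − k2·CT(t), with SciPy; it illustrates the simplest of the model structures a kinetic-analysis tool like COMKAT fits, but it does not use COMKAT's own API, and the input function and rate constants are invented for the example.

```python
# Generic one-tissue compartment model (illustration only; not COMKAT's API).
import numpy as np
from scipy.integrate import odeint

def plasma_input(t):
    # Invented arterial input function: fast bolus with exponential clearance.
    return 10.0 * t * np.exp(-t / 0.5)

def one_tissue(ct, t, K1, k2):
    # dCT/dt = K1 * Cp(t) - k2 * CT(t)
    return K1 * plasma_input(t) - k2 * ct

t = np.linspace(0, 60, 601)                  # minutes
K1, k2 = 0.1, 0.05                           # assumed rate constants (1/min)
ct = odeint(one_tissue, 0.0, t, args=(K1, k2)).ravel()

print(f'Peak tissue concentration: {ct.max():.3f} at t = {t[ct.argmax()]:.1f} min')
```

In a full kinetic analysis tool, the same model would be fit to measured time-activity curves by adjusting K1 and k2 to minimize the difference between model and data.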
Cellular Consequences of Telomere Shortening in Histologically Normal Breast Tissues
2013-09-01
…using the open-source, Java-based image analysis software package ImageJ (http://rsb.info.nih.gov/ij/) and a custom-designed plugin ("Telometer"). Tabulated data were stored in a MySQL (http://www.mysql.com) database and viewed through Microsoft Access (Microsoft Corp.).
Winner, Taryn L; Lanzarotta, Adam; Sommer, André J
2016-06-01
An effective method for detecting and characterizing counterfeit finished dosage forms and packaging materials is described in this study. Using attenuated total internal reflection Fourier transform infrared spectroscopic imaging, suspect tablet coating and core formulations as well as multi-layered foil safety seals, bottle labels, and cigarette tear tapes were analyzed and compared directly with those of a stored authentic product. The approach was effective for obtaining molecular information from structures as small as 6 μm.
Wide-field Imaging System and Rapid Direction of Optical Zoom (WOZ)
2010-09-25
…commercial software packages: SolidWorks, COMSOL Multiphysics, and ZEMAX optical design. SolidWorks is a computer-aided design package, which has a live interface to COMSOL. COMSOL is a finite element analysis/partial differential equation solver. ZEMAX is an optical design package. Both COMSOL and ZEMAX have live interfaces to MATLAB. Our initial investigations have enabled a model in SolidWorks to be updated in COMSOL, an FEA calculation…
Hebart, Martin N.; Görgen, Kai; Haynes, John-Dylan
2015-01-01
The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns.
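For orientation, the core cross-validated decoding step that toolboxes like TDT wrap can be written generically as follows; this is a scikit-learn sketch in Python, not TDT's MATLAB interface, and the data arrays are placeholders.

```python
# Generic cross-validated decoding of two conditions from voxel patterns
# (illustrates the analysis TDT automates; this is not TDT's own interface).
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500
X = rng.normal(size=(n_trials, n_voxels))     # placeholder trial-wise voxel patterns
y = np.repeat([0, 1], n_trials // 2)          # two experimental conditions

clf = make_pipeline(StandardScaler(), LinearSVC())   # feature scaling + linear SVM
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
accuracy = cross_val_score(clf, X, y, cv=cv)

print(f'Mean decoding accuracy: {accuracy.mean():.2f} (chance = 0.50)')
```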
Object-oriented design of medical imaging software.
Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R
1994-01-01
A special software package for the interactive display and manipulation of medical images was developed at the University Hospital of Geneva as part of a hospital-wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable by, and adaptable to the needs of, non-computer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. The software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: Unix X11/OSF-Motif-based workstations and the Macintosh family.
NASA Astrophysics Data System (ADS)
Chen, Kewei; Ge, Xiaolin; Yao, Li; Bandy, Dan; Alexander, Gene E.; Prouty, Anita; Burns, Christine; Zhao, Xiaojie; Wen, Xiaotong; Korn, Ronald; Lawson, Michael; Reiman, Eric M.
2006-03-01
Having approved fluorodeoxyglucose positron emission tomography (FDG PET) for the diagnosis of Alzheimer's disease (AD) in some patients, the Centers for Medicare and Medicaid Services suggested the need to develop and test analysis techniques to optimize diagnostic accuracy. We developed an automated computer package comparing an individual's FDG PET image to those of a group of normal volunteers. The normal control group includes FDG-PET images from 82 cognitively normal subjects, 61.89+/-5.67 years of age, who were characterized demographically, clinically, neuropsychologically, and by their apolipoprotein E genotype (known to be associated with a differential risk for AD). In addition, AD-affected brain regions functionally defined on the basis of a previous study (Alexander et al., Am J Psychiatr, 2002) were also incorporated. Our computer package permits the user to optionally select control subjects matching the individual patient for gender, age, and educational level. It is fully streamlined to require minimal user intervention. With one mouse click, the program runs automatically, normalizing the individual patient image, setting up a design matrix for comparing the single subject to a group of normal controls, performing the statistics, calculating the glucose reduction overlap index of the patient with the AD-affected brain regions, and displaying the findings in reference to the AD regions. In conclusion, the package automatically contrasts a single patient to a normal subject database using sound statistical procedures. With further validation, this computer package could be a valuable tool to assist physicians in decision making and in communicating findings to patients and patient families.
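The abstract does not spell out the statistics beyond the SPM-style design matrix, so the sketch below only illustrates the simplest version of the underlying idea, a voxel-wise z-map of one patient against the control database; the array sizes, z threshold, and AD mask are placeholders, not the package's actual settings.

```python
# Simplified analogue of contrasting one FDG PET image with a control database.
import numpy as np

controls = np.random.rand(82, 64, 64, 64)   # 82 spatially normalized control images (placeholder size)
patient = np.random.rand(64, 64, 64)        # one spatially normalized patient image

mu = controls.mean(axis=0)
sd = controls.std(axis=0, ddof=1)
z = (patient - mu) / np.where(sd > 0, sd, np.inf)   # negative z = relative hypometabolism

# Overlap of hypometabolic voxels with predefined AD-affected regions
# (the mask and the z threshold are hypothetical placeholders).
ad_mask = np.zeros(patient.shape, dtype=bool)
hypometabolic = z < -1.64
overlap_index = hypometabolic[ad_mask].mean() if ad_mask.any() else 0.0
```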
Spotlight-8 Image Analysis Software
NASA Technical Reports Server (NTRS)
Klimek, Robert; Wright, Ted
2006-01-01
Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.
FTOOLS: A FITS Data Processing and Analysis Software Package
NASA Astrophysics Data System (ADS)
Blackburn, J. K.
FTOOLS, a highly modular collection of over 110 utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column, or selecting subsets of rows based on a Boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations such as the generation and display of spectra or light curves. The collection provides both generic processing and analysis utilities and utilities specific to high energy astrophysics data sets used for the ASCA, ROSAT, GRO, and XTE missions. A core set of FTOOLS providing support for generic FITS data processing, FITS image analysis, and timing analysis can easily be split out of the full software package for users not needing the high energy astrophysics mission utilities. The FTOOLS software package is designed to be both compatible with IRAF and completely stand-alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self-documenting through the IRAF help facility and a stand-alone help task. Software is written in ANSI C and Fortran to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
Image analysis tools and emerging algorithms for expression proteomics
English, Jane A.; Lisacek, Frederique; Morris, Jeffrey S.; Yang, Guang-Zhong; Dunn, Michael J.
2012-01-01
Since their origins in academic endeavours in the 1970s, computational analysis tools have matured into a number of established commercial packages that underpin research in expression proteomics. In this paper we describe the image analysis pipeline for the established 2-D Gel Electrophoresis (2-DE) technique of protein separation, and by first covering signal analysis for Mass Spectrometry (MS), we also explain the current image analysis workflow for the emerging high-throughput ‘shotgun’ proteomics platform of Liquid Chromatography coupled to MS (LC/MS). The bioinformatics challenges for both methods are illustrated and compared, whilst existing commercial and academic packages and their workflows are described from both a user’s and a technical perspective. Attention is given to the importance of sound statistical treatment of the resultant quantifications in the search for differential expression. Despite wide availability of proteomics software, a number of challenges have yet to be overcome regarding algorithm accuracy, objectivity and automation, generally due to deterministic spot-centric approaches that discard information early in the pipeline, propagating errors. We review recent advances in signal and image analysis algorithms in 2-DE, MS, LC/MS and Imaging MS. Particular attention is given to wavelet techniques, automated image-based alignment and differential analysis in 2-DE, Bayesian peak mixture models and functional mixed modelling in MS, and group-wise consensus alignment methods for LC/MS. PMID:21046614
Detection of micro solder balls using active thermography and probabilistic neural network
NASA Astrophysics Data System (ADS)
He, Zhenzhi; Wei, Li; Shao, Minghui; Lu, Xingning
2017-03-01
Micro solder balls/bumps are widely used in electronic packaging. Inspecting these structures is challenging because the solder balls/bumps are often embedded between the component and the substrate, especially in flip-chip packaging. In this paper, a detection method for micro solder balls/bumps based on active thermography and a probabilistic neural network is investigated. A VH680 infrared imager is used to capture thermal images of the test vehicle, SFA10 packages. The temperature curves are smoothed with a moving-average filter to remove peak noise, and principal component analysis (PCA) is adopted to reconstruct the thermal images. The missed solder balls can be recognized explicitly in the second principal component image. A probabilistic neural network (PNN) is then established to identify defective bumps intelligently. The hot spots corresponding to the solder balls are segmented from the PCA-reconstructed image, and statistical parameters are calculated. To characterize the thermal properties of the solder bumps quantitatively, three representative features are selected and used as the input vector in PNN clustering. The results show that the actual outputs and the expected outputs are consistent in identifying the missed solder balls, and all bumps were recognized accurately, demonstrating the viability of the PNN for effective defect inspection in high-density microelectronic packaging.
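The two preprocessing steps named above, moving-average smoothing of the temperature curves and PCA reconstruction of the thermal images, can be sketched as follows; the frame size, window length, and number of components are assumptions, and scikit-learn's PCA stands in for whatever implementation the authors used.

```python
# Sketch of moving-average smoothing plus PCA reconstruction of a thermal sequence.
import numpy as np
from sklearn.decomposition import PCA

frames = np.random.rand(200, 128, 160)   # 200 thermal frames of 128 x 160 pixels (placeholder)
n_t, h, w = frames.shape

# Moving-average filter along time to suppress peak noise in each pixel's curve.
win = 5
kernel = np.ones(win) / win
curves = frames.reshape(n_t, -1)
smoothed = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, curves)

# PCA over the time dimension; each component yields one reconstructed image.
pca = PCA(n_components=3)
scores = pca.fit_transform(smoothed.T)   # (pixels x components)
pc_images = scores.T.reshape(3, h, w)
second_pc_image = pc_images[1]           # the image in which missed balls stand out
```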
Image analysis software versus direct anthropometry for breast measurements.
Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako
2014-10-01
To compare breast measurements performed using the software packages ImageTool(r), AutoCAD(r) and Adobe Photoshop(r) with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. When connecting the points, seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves were defined. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD(r) were the most reproducible and those made with ImageTool(r) were the most similar to direct anthropometry, while measurements with Adobe Photoshop(r) showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD(r) provided the highest precision and intermediate accuracy; ImageTool(r) had the highest accuracy and lowest precision; and Adobe Photoshop(r) showed intermediate precision and the worst accuracy among the three software packages.
QuantWorm: a comprehensive software package for Caenorhabditis elegans phenotypic assays.
Jung, Sang-Kyu; Aleman-Meza, Boanerges; Riepe, Celeste; Zhong, Weiwei
2014-01-01
Phenotypic assays are crucial in genetics; however, traditional methods that rely on human observation are unsuitable for quantitative, large-scale experiments. Furthermore, there is an increasing need for comprehensive analyses of multiple phenotypes to provide multidimensional information. Here we developed an automated, high-throughput computer imaging system for quantifying multiple Caenorhabditis elegans phenotypes. Our imaging system is composed of a microscope equipped with a digital camera and a motorized stage connected to a computer running the QuantWorm software package. Currently, the software package contains one data acquisition module and four image analysis programs: WormLifespan, WormLocomotion, WormLength, and WormEgg. The data acquisition module collects images and videos. The WormLifespan software counts the number of moving worms by using two time-lapse images; the WormLocomotion software computes the velocity of moving worms; the WormLength software measures worm body size; and the WormEgg software counts the number of eggs. To evaluate the performance of our software, we compared the results of our software with manual measurements. We then demonstrated the application of the QuantWorm software in a drug assay and a genetic assay. Overall, the QuantWorm software provided accurate measurements at a high speed. Software source code, executable programs, and sample images are available at www.quantworm.org. Our software package has several advantages over current imaging systems for C. elegans. It is an all-in-one package for quantifying multiple phenotypes. The QuantWorm software is written in Java and its source code is freely available, so it does not require use of commercial software or libraries. It can be run on multiple platforms and easily customized to cope with new methods and requirements.
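As a rough illustration of how two time-lapse images can be used to count moving worms, the sketch below differences the frames, thresholds the change map, and counts sufficiently large connected components; the threshold and size cut-off are arbitrary assumptions, not QuantWorm's values.

```python
# Frame differencing and connected-component counting (illustrative only).
import numpy as np
from scipy import ndimage

frame_a = np.random.rand(480, 640)     # placeholder time-lapse images
frame_b = np.random.rand(480, 640)

moved = np.abs(frame_b - frame_a) > 0.2          # pixels that changed between frames
labels, n = ndimage.label(moved)                 # group changed pixels into blobs
sizes = ndimage.sum(moved, labels, index=range(1, n + 1))
moving_worms = int(np.sum(sizes > 50))           # ignore tiny specks of noise
print("moving worms detected:", moving_worms)
```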
Matsumoto, Keiichi; Endo, Keigo
2013-06-01
Two kinds of Japanese guidelines for the data acquisition protocol of oncology fluoro-D-glucose positron emission tomography (FDG-PET)/computed tomography (CT) scans were created by the joint task force of the Japanese Society of Nuclear Medicine Technology (JSNMT) and the Japanese Society of Nuclear Medicine (JSNM), and published in Kakuigaku-Gijutsu 27(5): 425-456, 2007 and 29(2): 195-235, 2009. These guidelines aim to standardize PET image quality among facilities and different PET/CT scanner models. The objective of this study was to develop a personal computer-based performance measurement and image quality processor for the two kinds of Japanese guidelines for oncology (18)F-FDG PET/CT scans. We call this software package the "PET quality control tool" (PETquact). Microsoft Corporation's Windows(™) is used as the operating system for PETquact, which requires 1070×720 image resolution and includes 12 different applications. The accuracy of numerous PETquact applications was examined. For example, in the sensitivity application, the system sensitivity measurement results were equivalent when comparing two PET sinograms obtained from PETquact and from the report. PETquact is suited to analysis under both kinds of Japanese guideline and performs well in performance measurement and image quality analysis. PETquact can be used at any facility if the software package is installed on a laptop computer.
Analysis of simulated image sequences from sensors for restricted-visibility operations
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar
1991-01-01
A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.
Chandra Interactive Analysis of Observations (CIAO)
NASA Technical Reports Server (NTRS)
Dobrzycki, Adam
2000-01-01
The Chandra (formerly AXAF) telescope, launched on July 23, 1999, provides X-ray data with unprecedented spatial and spectral resolution. As part of the Chandra scientific support, the Chandra X-ray Observatory Center provides a new data analysis system, CIAO ("Chandra Interactive Analysis of Observations"). We will present the main components of the system: "First Look" analysis; SHERPA, a multi-dimensional, multi-mission modeling and fitting application; the Chandra Imaging and Plotting System; the Detect package (source detection algorithms); and the DM package (generic data manipulation tools). We will set up a demonstration of the portable version of the system and show examples of Chandra data analysis.
NASA Technical Reports Server (NTRS)
1990-01-01
The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.
Carlton, Holly D; Elmer, John W; Li, Yan; Pacheco, Mario; Goyal, Deepak; Parkinson, Dilworth Y; MacDowell, Alastair A
2016-04-13
Synchrotron radiation micro-tomography (SRµT) is a non-destructive three-dimensional (3D) imaging technique that offers high flux for fast data acquisition times with high spatial resolution. In the electronics industry there is serious interest in performing failure analysis on 3D microelectronic packages, many of which contain multiple levels of high-density interconnections. Often in tomography there is a trade-off between image resolution and the volume of a sample that can be imaged. This inverse relationship limits the usefulness of conventional computed tomography (CT) systems, since a microelectronic package is often large in cross-sectional area, 100-3,600 mm(2), but has important features on the micron scale. The micro-tomography beamline at the Advanced Light Source (ALS), in Berkeley, CA, USA, has a setup which is adaptable and can be tailored to a sample's properties, i.e., density, thickness, etc., with a maximum allowable cross-section of 36 x 36 mm. This setup also has the option of being either monochromatic in the energy range ~7-43 keV or operating with maximum flux in white light mode using a polychromatic beam. Presented here are details of the experimental steps taken to image an entire 16 x 16 mm system within a package, in order to obtain 3D images of the system with a spatial resolution of 8.7 µm, all within a scan time of less than 3 min. Also shown are results from packages scanned in different orientations and from a sectioned package for higher resolution imaging. In contrast, a conventional CT system would take hours to record data with potentially poorer resolution. Indeed, the ratio of field-of-view to throughput time is much higher when using the synchrotron radiation tomography setup. The description below of the experimental setup can be implemented and adapted for use with many other multi-material samples.
Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M
2015-10-01
Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such data sets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. The time-lapsed acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real-time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established and thus analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.
Liu, Yijin; Meirer, Florian; Williams, Phillip A.; Wang, Junyue; Andrews, Joy C.; Pianetta, Piero
2012-01-01
Transmission X-ray microscopy (TXM) has been well recognized as a powerful tool for non-destructive investigation of the three-dimensional inner structure of a sample with spatial resolution down to a few tens of nanometers, especially when combined with synchrotron radiation sources. Recent developments of this technique have presented a need for new tools for both system control and data analysis. Here a software package developed in MATLAB for script command generation and analysis of TXM data is presented. The first toolkit, the script generator, allows automating complex experimental tasks which involve up to several thousand motor movements. The second package was designed to accomplish computationally intense tasks such as data processing of mosaic and mosaic tomography datasets; dual-energy contrast imaging, where data are recorded above and below a specific X-ray absorption edge; and TXM X-ray absorption near-edge structure imaging datasets. Furthermore, analytical and iterative tomography reconstruction algorithms were implemented. The compiled software package is freely available. PMID:22338691
Motmot, an open-source toolkit for realtime video acquisition and analysis.
Straw, Andrew D; Dickinson, Michael H
2009-07-22
Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
Phenopix: a R package to process digital images of a vegetation cover
NASA Astrophysics Data System (ADS)
Filippa, Gianluca; Cremonese, Edoardo; Migliavacca, Mirco; Galvagno, Marta; Morra di Cella, Umberto; Richardson, Andrew
2015-04-01
Plant phenology is a globally recognized indicator of the effects of climate change on the terrestrial biosphere. Accordingly, new tools to automatically track the seasonal development of a vegetation cover are becoming available and increasingly deployed. Among them, near-continuous digital images are being collected in several networks in the US, Europe, Asia and Australia in a range of different ecosystems, including agricultural lands, deciduous and evergreen forests, and grasslands. The growing scientific interest in vegetation image analysis highlights the need for easy-to-use, flexible and standardized processing techniques. In this contribution we illustrate a new open source package called "phenopix", written in the R language, that allows users to process images of a vegetation cover. The main features include: (i) definition of one or more areas of interest on an image and processing of the pixel information within them, (ii) computation of vegetation indexes based on the red, green and blue channels, (iii) fitting a curve to the seasonal trajectory of vegetation indexes and extracting relevant dates (aka thresholds) from the seasonal trajectory; and (iv) analysis of image pixels separately to extract spatially explicit phenological information. The utilities of the package will be illustrated in detail for two subalpine sites, a grassland and a larch stand at about 2000 m in the Italian Western Alps. The phenopix package is a cost-free and easy-to-use tool that allows users to process digital images of a vegetation cover in a standardized, flexible and reproducible way. The software is available for download at the R-Forge web site (r-forge.r-project.org/projects/phenopix/).
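phenopix itself is an R package; purely to make the processing steps concrete, here is a Python sketch of one of them, computing a green chromatic coordinate (GCC) inside a region of interest for a series of frames and extracting a crude threshold date. The placeholder frames, ROI coordinates, and threshold rule are assumptions, not phenopix's implementation.

```python
# Green chromatic coordinate (GCC) inside a region of interest, per image.
import numpy as np

def gcc(rgb, roi):                        # roi = (row0, row1, col0, col1)
    r0, r1, c0, c1 = roi
    patch = np.asarray(rgb, dtype=float)[r0:r1, c0:c1, :3]
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    return float(np.mean(g / (r + g + b + 1e-9)))

# Placeholder frames; in practice each frame would be loaded from a camera archive.
images = [np.random.rand(480, 640, 3) * 255 for _ in range(200)]
roi = (200, 400, 300, 600)
series = np.array([gcc(img, roi) for img in images])

# Crude "start of season": first index where GCC exceeds half its seasonal amplitude.
threshold = series.min() + 0.5 * (series.max() - series.min())
start_of_season_index = int(np.argmax(series > threshold))
```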
Mirion--a software package for automatic processing of mass spectrometric images.
Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B
2013-08-01
Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.
Tse, George H; Marson, Lorna P
2013-10-01
Immunohistochemistry continues to be a powerful tool for the detection of antigens. There are several commercially available software packages that allow image analysis; however, these can be complex, require a relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference between converting brightfield microscopy images to binary images to measure black and white pixels using ImageJ and measuring a range of brown pixels with Photoshop (Student t test, P=0.243, correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with analysis with ImageJ. A lower 10th percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels, which is particularly relevant for control tissues or negative controls, whereas after ImageJ analysis the same images would result in spuriously high levels of positivity. Furthermore, comparing the 2 methods by Bland-Altman plot revealed that the methodologies did not agree when measuring images with a higher percentage of positive staining, and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop or ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
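Whichever tool is used, the quantity being compared is the percentage of positive pixels in the stained section; a minimal sketch of that measurement is shown below, with a simple binary rule for "brown" DAB-like pixels standing in for either package. The colour thresholds are assumptions, not either package's settings.

```python
# Percentage positive area from a simple colour threshold (illustrative only).
import numpy as np

rgb = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)   # placeholder section image

r = rgb[..., 0].astype(int)
g = rgb[..., 1].astype(int)
b = rgb[..., 2].astype(int)
# Crude "brown" rule: red clearly above blue and the pixel not too bright overall.
positive = (r - b > 40) & (r + g + b < 550)

percent_positive_area = 100.0 * positive.mean()
print("positive area: %.2f%%" % percent_positive_area)
```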
Variations in algorithm implementation among quantitative texture analysis software packages
NASA Astrophysics Data System (ADS)
Foy, Joseph J.; Mitta, Prerana; Nowosatka, Lauren R.; Mendel, Kayla R.; Li, Hui; Giger, Maryellen L.; Al-Hallaq, Hania; Armato, Samuel G.
2018-02-01
Open-source texture analysis software allows for the advancement of radiomics research. Variations in texture features, however, result from discrepancies in algorithm implementation. Anatomically matched regions of interest (ROIs) that captured normal breast parenchyma were placed in the magnetic resonance images (MRI) of 20 patients at two time points. Six first-order features and six gray-level co-occurrence matrix (GLCM) features were calculated for each ROI using four texture analysis packages. Features were extracted using package-specific default GLCM parameters and using GLCM parameters modified to yield the greatest consistency among packages. Relative change in the value of each feature between time points was calculated for each ROI. Distributions of relative feature value differences were compared across packages. Absolute agreement among feature values was quantified by the intra-class correlation coefficient. Among first-order features, significant differences were found for max, range, and mean, and only kurtosis showed poor agreement. All six second-order features showed significant differences using package-specific default GLCM parameters, and five second-order features showed poor agreement; with modified GLCM parameters, no significant differences among second-order features were found, and all second-order features showed poor agreement. While relative texture change discrepancies existed across packages, these differences were not significant when consistent parameters were used.
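The two feature families compared in the study are straightforward to compute with open-source tools; the sketch below uses scikit-image, and the GLCM settings shown (distance, angles, gray levels) are exactly the kind of parameters whose package-specific defaults drive the reported discrepancies. The values here are arbitrary assumptions, not any package's defaults.

```python
# First-order and GLCM texture features for one ROI (parameter values are assumptions).
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder parenchyma ROI

first_order = {
    "mean": roi.mean(),
    "max": roi.max(),
    "range": int(roi.max()) - int(roi.min()),
    "std": roi.std(),
    "skewness": skew(roi.ravel()),
    "kurtosis": kurtosis(roi.ravel()),
}

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)
second_order = {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
```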
PynPoint code for exoplanet imaging
NASA Astrophysics Data System (ADS)
Amara, A.; Quanz, S. P.; Akeret, J.
2015-04-01
We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consists of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical data set in roughly 1.5 min on a Mac Pro when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the singular value decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, co-adding over 50, takes 3 s with a peak memory usage of 600 MB and can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without affecting the user's application programming interface. We distribute the PynPoint package under the GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
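The core correction PynPoint performs, modelling the stellar point spread function from the leading principal components of the image stack and subtracting it, can be sketched in a few lines; this is not PynPoint's API, and the stack size and number of components are assumptions.

```python
# PCA-based PSF subtraction for an ADI image stack (illustrative, not PynPoint's API).
import numpy as np

stack = np.random.rand(500, 128, 128)      # star-centred ADI frames (placeholder)
n, h, w = stack.shape
flat = stack.reshape(n, -1)
centred = flat - flat.mean(axis=0)

k = 20                                     # number of principal components for the PSF model
_, _, vt = np.linalg.svd(centred, full_matrices=False)
basis = vt[:k]                             # (k x pixels) PSF basis

coeffs = centred @ basis.T                 # project each frame onto the basis
residuals = (centred - coeffs @ basis).reshape(n, h, w)
# Derotating the residuals by the parallactic angles and median-combining them
# (omitted here) would then reveal faint companions around the star.
```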
Validation of luminescent source reconstruction using spectrally resolved bioluminescence images
NASA Astrophysics Data System (ADS)
Virostko, John M.; Powers, Alvin C.; Jansen, E. D.
2008-02-01
This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.
NASA Technical Reports Server (NTRS)
Squyres, S. W.
1993-01-01
The MESUR mission will place a network of small, robust landers on the Martian surface, making a coordinated set of observations for at least one Martian year. MESUR presents some major challenges for development of instruments, instrument deployment systems, and on board data processing techniques. The instrument payload has not yet been selected, but the straw man payload is (1) a three-axis seismometer; (2) a meteorology package that senses pressure, temperature, wind speed and direction, humidity, and sky brightness; (3) an alpha-proton-X-ray spectrometer (APXS); (4) a thermal analysis/evolved gas analysis (TA/EGA) instrument; (5) a descent imager; (6) a panoramic surface imager; (7) an atmospheric structure instrument (ASI) that senses pressure, temperature, and acceleration during descent to the surface; and (8) radio science. Because of the large number of landers to be sent (about 16), all these instruments must be very lightweight. All but the descent imager and the ASI must survive landing loads that may approach 100 g. The meteorology package, seismometer, and surface imager must be able to survive on the surface for at least one Martian year. The seismometer requires deployment off the lander body. The panoramic imager and some components of the meteorology package require deployment above the lander body. The APXS must be placed directly against one or more rocks near the lander, prompting consideration of a micro rover for deployment of this instrument. The TA/EGA requires a system to acquire, contain, and heat a soil sample. Both the imagers and, especially, the seismometer will be capable of producing large volumes of data, and will require use of sophisticated data compression techniques.
AMIDE: a free software tool for multimodality medical image analysis.
Loening, Andreas Markus; Gambhir, Sanjiv Sam
2003-07-01
Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
Appearance Matters: Neural Correlates of Food Choice and Packaging Aesthetics
Van der Laan, Laura N.; De Ridder, Denise T. D.; Viergever, Max A.; Smeets, Paul A. M.
2012-01-01
Neuro-imaging holds great potential for predicting choice behavior from brain responses. In this study we used both traditional mass-univariate and state-of-the-art multivariate pattern analysis to establish which brain regions respond to preferred packages and to what extent neural activation patterns can predict realistic low-involvement consumer choices. More specifically, this was assessed in the context of package-induced binary food choices. Mass-univariate analyses showed that several regions, among which the bilateral striatum, were more strongly activated in response to preferred food packages. Food choices could be predicted with an accuracy of up to 61.2% by activation patterns in brain regions previously found to be involved in healthy food choices (superior frontal gyrus) and visual processing (middle occipital gyrus). In conclusion, this study shows that mass-univariate analysis can detect small package-induced differences in product preference and that MVPA can successfully predict realistic low-involvement consumer choices from functional MRI data. PMID:22848586
Analysis of live cell images: Methods, tools and opportunities.
Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens
2017-02-15
Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review also includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.
Digital PIV (DPIV) Software Analysis System
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A software package was developed to provide a Digital PIV (DPIV) capability for NASA LaRC. The system provides an automated image capture, test correlation, and autocorrelation analysis capability for the Kodak Megaplus 1.4 digital camera system for PIV measurements. The package includes three separate programs that, when used together with the PIV data validation algorithm, constitute a complete DPIV analysis capability. The programs are run on an IBM PC/AT host computer running either Microsoft Windows 3.1 or Windows 95, using a 'quickwin' format that allows a simple user interface and output capabilities in the Windows environment.
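The heart of any PIV analysis, whether auto- or cross-correlation based, is estimating the particle displacement within an interrogation window; the sketch below shows that single step using FFT-based circular cross-correlation, and is only an illustration under assumed window sizes, not the LaRC package's algorithm.

```python
# Displacement of one interrogation window via FFT cross-correlation (illustrative).
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement (dx, dy) of the pattern from win_a to win_b."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.fft.rfft2(b) * np.conj(np.fft.rfft2(a)), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dx, dy

win_a = np.random.rand(32, 32)                       # 32 x 32 interrogation window
win_b = np.roll(win_a, shift=(3, -2), axis=(0, 1))   # second exposure, pattern shifted
print(window_displacement(win_a, win_b))             # expect (-2, 3)
```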
Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.
1991-01-01
Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in three main phases of the application and are categorized as: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
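The three phases described above map naturally onto a threshold, a connected-component labeling pass, and a geometric measurement step; the sketch below uses off-the-shelf scikit-image routines purely to illustrate that flow, not the thesis's own algorithms, and the threshold and size cut-off are assumptions.

```python
# Segment, label, and measure objects in a binary image (illustrative flow only).
import numpy as np
from skimage.measure import label, regionprops

gray = np.random.rand(256, 256)            # placeholder grayscale image
binary = gray > 0.8                         # image segmentation: global threshold

labeled = label(binary, connectivity=2)     # object recognition: connected components
for region in regionprops(labeled):         # quantitative analysis: geometric properties
    if region.area < 10:                    # ignore speckle
        continue
    print(region.label, region.area, region.centroid, region.perimeter)
```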
Subband/Transform MATLAB Functions For Processing Images
NASA Technical Reports Server (NTRS)
Glover, D.
1995-01-01
SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB (TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions can be cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems; for example, transforms are used to prepare data for lossy compression. Written for use in the MATLAB mathematical-analysis environment.
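SUBTRANS itself is a set of MATLAB routines; the Python sketch below only illustrates the generic operation the description refers to, applying a block transform (an 8x8 DCT here) and regrouping like-frequency coefficients into spatial-frequency subbands. The block size and choice of transform are assumptions.

```python
# Block DCT of an image and regrouping of coefficients into subbands (illustrative).
import numpy as np
from scipy.fft import dctn

image = np.random.rand(256, 256)
B = 8
h, w = image.shape
blocks = image.reshape(h // B, B, w // B, B).swapaxes(1, 2)   # (32, 32, 8, 8) tiles

coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")            # transform every block

# Subband (u, v) collects coefficient (u, v) from every block; subbands[0, 0] is
# the low-pass band, and higher (u, v) hold progressively higher spatial frequencies.
subbands = coeffs.transpose(2, 3, 0, 1)                       # (8, 8, 32, 32)
```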
Application of dual-energy x-ray techniques for automated food container inspection
NASA Astrophysics Data System (ADS)
Shashishekhar, N.; Veselitza, D.
2016-02-01
Manufacturing of plastic food containers often results in small metal particles getting into the containers during the production process. Metal detectors are usually not sensitive enough to detect these metal particles (0.5 mm or less), especially when the containers are stacked in large sealed shipping packages; X-ray inspection of these packages provides a viable alternative. This paper presents the results of an investigation into dual-energy X-ray techniques for automated detection of small metal particles in plastic food container packages. The sample packages consist of sealed cardboard boxes containing stacks of food containers: plastic cups for food, and Styrofoam cups for noodles. The primary goal of the investigation was to automatically identify small metal particles down to 0.5 mm in diameter or less, randomly located within the containers. The multiple container stacks in each box make it virtually impossible to reliably detect the particles with single-energy X-ray techniques, either visually or with image processing. The stacks overlap in the X-ray image and create many indications almost identical in contrast and size to real metal particles. Dual-energy X-ray techniques were investigated and found to result in a clear separation of the metal particles from the food container stack-ups. Automated image analysis of the resulting images provides reliable detection of the small metal particles.
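The abstract does not give the exact processing chain, so the sketch below shows only a generic dual-energy manipulation: a weighted log subtraction whose weight is chosen to cancel the plastic and cardboard signal so that dense metal particles stand out. The weight and detection threshold are assumptions, not the authors' values.

```python
# Generic dual-energy weighted log subtraction (illustrative; weights are assumptions).
import numpy as np

low_kv = np.random.rand(512, 512) * 0.9 + 0.1    # low-energy transmission image (placeholder)
high_kv = np.random.rand(512, 512) * 0.9 + 0.1   # high-energy transmission image (placeholder)

att_low = -np.log(low_kv)                        # attenuation via Beer-Lambert
att_high = -np.log(high_kv)

w = 1.4                                          # tuned to null the container stack-up
metal_map = att_low - w * att_high               # metal attenuates low energies relatively more

candidates = metal_map > metal_map.mean() + 4 * metal_map.std()
print("suspect metal pixels:", int(candidates.sum()))
```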
Schmitter, Daniel; Wachowicz, Paulina; Sage, Daniel; Chasapi, Anastasia; Xenarios, Ioannis; Simanis; Unser, Michael
2013-01-01
The yeast Schizosaccharomyces pombe is frequently used as a model for studying the cell cycle. The cells are rod-shaped and divide by medial fission. The process of cell division, or cytokinesis, is controlled by a network of signaling proteins called the Septation Initiation Network (SIN); SIN proteins associate with the spindle pole bodies (SPBs) during nuclear division (mitosis). Some SIN proteins associate with both SPBs early in mitosis, and then display strongly asymmetric signal intensity at the SPBs in late mitosis, just before cytokinesis. This asymmetry is thought to be important for correct regulation of SIN signaling and coordination of cytokinesis and mitosis. In order to study the dynamics of organelles or large protein complexes such as the SPB, which have been labeled with a fluorescent protein tag in living cells, a number of image analysis problems must be solved: the cell outline must be detected automatically, and the position and signal intensity associated with the structures of interest within the cell must be determined. We present a new 2D and 3D image analysis system that permits versatile and robust analysis of motile, fluorescently labeled structures in rod-shaped cells. We have designed an image analysis system that we have implemented as a user-friendly software package allowing fast and robust image analysis of large numbers of rod-shaped cells. We have developed new robust algorithms, which we combined with existing methodologies to facilitate fast and accurate analysis. Our software permits the detection and segmentation of rod-shaped cells in either static or dynamic (i.e. time-lapse) multi-channel images. It enables tracking of two structures (for example SPBs) in two different image channels. For 2D or 3D static images, the locations of the structures are identified, and intensity values are then extracted together with several quantitative parameters, such as length, width, cell orientation, background fluorescence and the distance between the structures of interest. Furthermore, two kinds of kymographs of the tracked structures can be established, one representing the migration with respect to their relative position, the other representing their individual trajectories inside the cell. This software package, called "RodCellJ", allowed us to analyze a large number of S. pombe cells to understand the rules that govern SIN protein asymmetry. "RodCellJ" is freely available to the community as a package of several ImageJ plugins for analyzing the behavior of large numbers of rod-shaped cells in an extensive manner. The integration of different image-processing techniques in a single package, as well as the development of novel algorithms, not only speeds up the analysis with respect to existing tools but also yields higher accuracy. Its utility was demonstrated on both 2D and 3D static and dynamic images to study the septation initiation network of the yeast Schizosaccharomyces pombe. More generally, it can be used in any biological context where fluorescent-protein-labeled structures need to be analyzed in rod-shaped cells. RodCellJ is freely available at http://bigwww.epfl.ch/algorithms.html.
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2016-01-01
Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
Mossotti, Victor G.; Eldeeb, A. Raouf
2000-01-01
Turcotte, 1997, and Barton and La Pointe, 1995, have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.
Microscopy image segmentation tool: Robust image data analysis
NASA Astrophysics Data System (ADS)
Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.
2014-03-01
We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.
Methods in Astronomical Image Processing
NASA Astrophysics Data System (ADS)
Jörsäter, S.
Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
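The CCD reduction steps listed in this outline follow a standard recipe; a minimal sketch of the central three, bias subtraction, scaled dark subtraction, and flat fielding, is shown below with placeholder frames and exposure times standing in for real calibration data.

```python
# Basic CCD calibration: bias, dark, and flat corrections (placeholder data).
import numpy as np

raw = np.random.rand(1024, 1024) * 4000 + 500      # raw science frame (ADU)
master_bias = np.full((1024, 1024), 500.0)          # combined bias frames
master_dark = np.full((1024, 1024), 10.0)           # dark signal accumulated over t_dark
master_flat = np.random.rand(1024, 1024) * 0.1 + 0.95

t_exp, t_dark = 300.0, 100.0                        # exposure times in seconds

debiased = raw - master_bias
dark_subtracted = debiased - master_dark * (t_exp / t_dark)
flat = master_flat / np.median(master_flat)         # normalized flat field
calibrated = dark_subtracted / flat                 # final calibrated science frame
```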
Optical smart packaging to reduce transmitted information.
Cabezas, Luisa; Tebaldi, Myrian; Barrera, John Fredy; Bolognini, Néstor; Torroba, Roberto
2012-01-02
We demonstrate a smart image-packaging optical technique that uses what we believe is a new concept to save byte space when transmitting data. The technique supports a large set of images mapped into modulated speckle patterns, which are then multiplexed into a single package. This operation results in a substantial decrease in the final number of bytes in the package with respect to the amount resulting from adding the images without using the method. Besides, there are no requirements on the type of images to be processed. We present results that prove the potential of the technique.
Effects of and attention to graphic warning labels on cigarette packages.
Süssenbach, Philipp; Niemeier, Sarah; Glock, Sabine
2013-01-01
The present study investigates the effects of graphic cigarette warnings compared to text-only cigarette warnings on smokers' explicit (i.e. ratings of the packages, cognitions about smoking, perceived health risk, quit intentions) and implicit attitudes. In addition, participants' visual attention towards the graphic warnings was recorded using eye-tracking methodology. Sixty-three smokers participated in the present study and viewed either graphic cigarette warnings with aversive and non-aversive images or text-only warnings. Data were analysed using analysis of variance and correlation analysis. In particular, graphic cigarette warnings with aversive content drew attention and elicited high threat. However, whereas attention directed to the textual information of the graphic warnings predicted smokers' risk perceptions, attention directed to the images of the graphic warnings did not. Moreover, smokers in the graphic warning condition reported more positive cognitions about smoking, thus revealing cognitive dissonance. Smokers employ defensive psychological mechanisms when confronted with threatening warnings. Although aversive images attract attention, they do not promote health knowledge. Implications for graphic health warnings and the importance of taking their content (i.e. aversive vs. non-aversive images) into account are discussed.
ELAS: A powerful, general purpose image processing package
NASA Technical Reports Server (NTRS)
Walters, David; Rickman, Douglas
1991-01-01
ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.
Meteorological Instruction Software
NASA Technical Reports Server (NTRS)
1990-01-01
At Florida State University and the Naval Postgraduate School, meteorology students have the opportunity to apply theoretical studies to current weather phenomena, even prepare forecasts and see how their predictions stand up utilizing GEMPAK. GEMPAK can display data quickly in both conventional and non-traditional ways, allowing students to view multiple perspectives of the complex three-dimensional atmospheric structure. With GEMPAK, mathematical equations come alive as students do homework and laboratory assignments on the weather events happening around them. Since GEMPAK provides data on a 'today' basis, each homework assignment is new. At the Naval Postgraduate School, students are now using electronically-managed environmental data in the classroom. The School's Departments of Meteorology and Oceanography have developed the Interactive Digital Environment Analysis (IDEA) Laboratory. GEMPAK is the IDEA Lab's general purpose display package; the IDEA image processing package is a modified version of NASA's Device Management System. Bringing the graphic and image processing packages together is NASA's product, the Transportable Application Executive (TAE).
Effectiveness of X-ray grating interferometry for non-destructive inspection of packaged devices
NASA Astrophysics Data System (ADS)
Uehara, Masato; Yashiro, Wataru; Momose, Atsushi
2013-10-01
It is difficult to inspect packaged devices such as IC packages and power modules because the devices contain various materials, such as semiconductors, metals, ceramics, and resin. In this paper, we demonstrate the effectiveness of X-ray grating interferometry (XGI) using a laboratory X-ray tube for the industrial inspection of packaged devices. The conventional absorption image showed heavy-element components such as metal wires and electrodes, but it did not reveal defects in the light-element components. On the other hand, the differential phase-contrast image obtained by XGI revealed microvoids and scars in the encapsulant of the samples. The visibility-contrast image, also obtained by XGI, showed cracks in the ceramic insulator of the power module sample. In addition, this image revealed the silicon plate surrounded by encapsulant having the same X-ray absorption coefficient. These defects and components are invisible in conventional industrial X-ray imaging; XGI thus has attractive potential for the industrial inspection of packaged devices.
SpectraPLOT, Visualization Package with a User-Friendly Graphical Interface
NASA Astrophysics Data System (ADS)
Sebald, James; Macfarlane, Joseph; Golovkin, Igor
2017-10-01
SPECT3D is a collisional-radiative spectral analysis package designed to compute detailed emission, absorption, or x-ray scattering spectra, filtered images, XRD signals, and other synthetic diagnostics. The spectra and images are computed for virtual detectors by post-processing the results of hydrodynamics simulations in 1D, 2D, and 3D geometries. SPECT3D can account for a variety of instrumental response effects so that direct comparisons between simulations and experimental measurements can be made. SpectraPLOT is a user-friendly graphical interface for viewing a wide variety of results from SPECT3D simulations, and applying various instrumental effects to the simulated images and spectra. We will present SpectraPLOT's ability to display a variety of data, including spectra, images, light curves, streaked spectra, space-resolved spectra, and drilldown plasma property plots, for an argon-doped capsule implosion experiment example. Future SpectraPLOT features and enhancements will also be discussed.
Rebollar, Rubén; Gil, Ignacio; Lidón, Iván; Martín, Javier; Fernández, María J; Rivera, Sandra
2017-09-01
This paper analyses the influence that certain aspects of packaging design have on the consumer expectations of a series of sensory and non-sensory attributes and on willingness to buy for a bag of crisps in Spain. A two-part experiment was conducted in which 174 people evaluated the attributes for different stimuli using an online survey. In the first part, four stimuli were created in which two factors were varied: the packaging material and the image displayed. Interaction was identified between both factors for the attributes Crunchy, High quality and Artisan. For the attributes Salty, Crunchy and Willingness to buy, the image was the only significant factor, with the image displaying crisps ready for consumption being the only one that obtained higher scores. For the attribute Intense flavour, no statistically significant differences were identified among the stimuli. In general terms, the image displayed on the bag had a greater influence than the material from which the bag was made. In the second part, an analysis was made of the most effective way (visual cues versus verbal cues) to transmit the information that the crisps were fried in olive oil. To this end, two stimuli were designed: one displaying an image of an oil cruet and another with an allusive text. For all the attributes (Intense flavour, Crunchy, Artisan, High quality, Healthy and Willingness to buy), higher scores were obtained with the image than with the text. These results have important implications for crisps producers, marketers and packaging designers. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cigarette package design: opportunities for disease prevention.
Difranza, J R; Clark, D M; Pollay, R W
2002-06-15
To learn how cigarette packages are designed and to determine to what extent cigarette packages are designed to target children. A computer search was made of all Internet websites that post tobacco industry documents using the search terms: packaging, package design, package study, box design, logo, trademark and design study. All documents were retrieved electronically and analyzed by the first author for recurrent themes. Cigarette manufacturers devote a great deal of attention and expense to package design because it is central to their efforts to create brand images. Colors, graphic elements, proportioning, texture, materials and typography are tested and used in various combinations to create the desired product and user images. Designs help to create the perceived product attributes and project a personality image of the user with the intent of fulfilling the psychological needs of the targeted type of smoker. The communication of these images and attributes is conducted through conscious and subliminal processes. Extensive testing is conducted using a variety of qualitative and quantitative research techniques. The promotion of tobacco products through appealing imagery cannot be stopped without regulating the package design. The same marketing research techniques used by the tobacco companies can be used to design generic packaging and more effective warning labels targeted at specific consumers.
Cigarette package design: opportunities for disease prevention
DiFranza, JR; Clark, DM; Pollay, RW
2003-01-01
Objective To learn how cigarette packages are designed and to determine to what extent cigarette packages are designed to target children. Methods A computer search was made of all Internet websites that post tobacco industry documents using the search terms: packaging, package design, package study, box design, logo, trademark and design study. All documents were retrieved electronically and analyzed by the first author for recurrent themes. Data Synthesis Cigarette manufacturers devote a great deal of attention and expense to package design because it is central to their efforts to create brand images. Colors, graphic elements, proportioning, texture, materials and typography are tested and used in various combinations to create the desired product and user images. Designs help to create the perceived product attributes and project a personality image of the user with the intent of fulfilling the psychological needs of the targeted type of smoker. The communication of these images and attributes is conducted through conscious and subliminal processes. Extensive testing is conducted using a variety of qualitative and quantitative research techniques. Conclusion The promotion of tobacco products through appealing imagery cannot be stopped without regulating the package design. The same marketing research techniques used by the tobacco companies can be used to design generic packaging and more effective warning labels targeted at specific consumers. PMID:19570250
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
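As an illustration of one of the quantitative procedures mentioned above, the following minimal Python sketch performs linear spectral unmixing of a single pixel spectrum against a set of endmember spectra using non-negative least squares. The endmember library and pixel spectrum are hypothetical inputs, and this is an illustration of the technique, not SIPS/IDL code.

# Minimal sketch of linear spectral unmixing (abundance estimation from a
# pixel spectrum given endmember spectra) using non-negative least squares.
# Endmembers and pixel spectra here are hypothetical inputs.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel_spectrum, endmembers):
    """
    pixel_spectrum: (n_bands,) reflectance of one pixel
    endmembers: (n_bands, n_endmembers) library of endmember spectra
    Returns non-negative abundance estimates (not forced to sum to one).
    """
    abundances, residual = nnls(endmembers, pixel_spectrum)
    return abundances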
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
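The core of the time-series differential photometry that AIJ streamlines can be summarized by a relative-flux calculation; the numpy sketch below illustrates that idea under simple assumptions (a single target and an ensemble of comparison stars) and is not AIJ's own implementation.

# Small numpy sketch of the basic relative-flux calculation used in
# time-series differential photometry: target flux divided by the summed
# flux of comparison stars, then normalized to unity.
import numpy as np

def relative_flux(target_flux, comparison_fluxes):
    """
    target_flux: (n_frames,) aperture flux of the target star
    comparison_fluxes: (n_frames, n_comps) aperture fluxes of comparison stars
    """
    ensemble = np.sum(comparison_fluxes, axis=1)
    rel = target_flux / ensemble
    return rel / np.median(rel)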
Klepacz, Naomi A; Nash, Robert A; Egan, M Bernadette; Hodgkins, Charo E; Raats, Monique M
2016-08-01
Images on food and dietary supplement packaging might lead people to infer (appropriately or inappropriately) certain health benefits of those products. Research on this issue largely involves direct questions, which could (a) elicit inferences that would not be made unprompted, and (b) fail to capture inferences made implicitly. Using a novel memory-based method, in the present research, we explored whether packaging imagery elicits health inferences without prompting, and the extent to which these inferences are made implicitly. In 3 experiments, participants saw fictional product packages accompanied by written claims. Some packages contained an image that implied a health-related function (e.g., a brain), and some contained no image. Participants studied these packages and claims, and subsequently their memory for seen and unseen claims was tested. When a health image was featured on a package, participants often subsequently recognized health claims that, despite being implied by the image, were not truly presented. In Experiment 2, these recognition errors persisted despite an explicit warning against treating the images as informative. In Experiment 3, these findings were replicated in a large consumer sample from 5 European countries, and with a cued-recall test. These findings confirm that images can act as health claims, by leading people to infer health benefits without prompting. These inferences appear often to be implicit, and could therefore be highly pervasive. The data underscore the importance of regulating imagery on product packaging; memory-based methods represent innovative ways to measure how leading (or misleading) specific images can be. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
VIP: Vortex Image Processing Package for High-contrast Direct Imaging
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean
2017-07-01
We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied with Jupyter notebook tutorials illustrating the main functionalities of the library.
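The PCA-based ADI subtraction at the heart of such pipelines can be sketched in a few lines of numpy. The example below illustrates the general technique (low-rank PSF model, subtraction, derotation, median combination) rather than the vip_hci API itself; the datacube, parallactic angles, and number of components are assumed inputs.

# Minimal sketch of PCA-based ADI post-processing (the general idea behind
# packages such as VIP), not the vip_hci API itself.
import numpy as np

def pca_adi_residual(cube, parallactic_angles, n_components=5):
    """cube: (n_frames, ny, nx) ADI sequence; returns the derotated, combined residual."""
    from scipy.ndimage import rotate

    n, ny, nx = cube.shape
    X = cube.reshape(n, -1)
    Xc = X - X.mean(axis=0)

    # Principal components of the stack via SVD (low-rank PSF model).
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:n_components]                  # (k, ny*nx)

    # Project each frame onto the modes and subtract the reconstruction.
    coeffs = Xc @ modes.T                      # (n, k)
    residuals = (Xc - coeffs @ modes).reshape(n, ny, nx)

    # Derotate residual frames to a common orientation and median-combine.
    derot = [rotate(r, -ang, reshape=False, order=1)
             for r, ang in zip(residuals, parallactic_angles)]
    return np.median(derot, axis=0)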
Failure Analysis of CCD Image Sensors Using SQUID and GMR Magnetic Current Imaging
NASA Technical Reports Server (NTRS)
Felt, Frederick S.
2005-01-01
During electrical testing of a Full Field CCD Image Sensor, electrical shorts were detected on three of six devices. These failures occurred after the parts were soldered to the PCB. Failure analysis was performed to determine the cause and locations of these failures on the devices. After removing the fiber optic faceplate, optical inspection was performed on the CCDs to understand the design and package layout. Optical inspection revealed that the device had a light shield ringing the CCD array. This structure complicated the failure analysis. Alternate methods of analysis were considered, including liquid crystal, light and thermal emission, LT/A, TT/A SQUID, and MP. Of these, SQUID and MP techniques were pursued for further analysis. Magnetoresistive current imaging technology is also discussed and compared to SQUID.
WGCNA: an R package for weighted correlation network analysis.
Langfelder, Peter; Horvath, Steve
2008-12-29
Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
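The core construction described here, a soft-thresholded correlation adjacency followed by module detection, can be sketched in Python as below. This is a simplified illustration (it clusters on 1 - adjacency rather than the topological overlap dissimilarity WGCNA typically uses) and does not reproduce the R package's functions; the expression matrix, power, and module count are assumptions.

# Minimal Python sketch of the core WGCNA idea: a soft-thresholded
# correlation adjacency followed by hierarchical module detection.
# This illustrates the method, not the WGCNA R package's own functions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def wgcna_like_modules(expr, beta=6, n_modules=4):
    """expr: (n_samples, n_genes) expression matrix (hypothetical input)."""
    corr = np.corrcoef(expr, rowvar=False)          # gene-gene correlations
    adjacency = np.abs(corr) ** beta                # soft thresholding
    dissimilarity = 1.0 - adjacency
    np.fill_diagonal(dissimilarity, 0.0)

    # Average-linkage clustering on the dissimilarity matrix.
    condensed = squareform(dissimilarity, checks=False)
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_modules, criterion="maxclust")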
WGCNA: an R package for weighted correlation network analysis
Langfelder, Peter; Horvath, Steve
2008-01-01
Background Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. Results The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. Conclusion The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. The R package along with its source code and additional material are freely available at . PMID:19114008
ESO/ST-ECF Data Analysis Workshop, 5th, Garching, Germany, Apr. 26, 27, 1993, Proceedings
NASA Astrophysics Data System (ADS)
Grosbol, Preben; de Ruijsscher, Resy
1993-01-01
Various papers on astronomical data analysis are presented. Individual topics addressed include: surface photometry of early-type galaxies, wavelet transform and adaptive filtering, a package for surface photometry of galaxies, calibration of large-field mosaics, surface photometry of galaxies with HST, wavefront-supported image deconvolution, seeing effects on elliptical galaxies, a multiple-algorithms deconvolution program, enhancement of Skylab X-ray images, MIDAS procedures for the image analysis of E-S0 galaxies, photometric data reductions under MIDAS, crowded field photometry with deconvolved images, and the DENIS Deep Near Infrared Survey. Also discussed are: analysis of astronomical time series, detection of low-amplitude stellar pulsations, a new SOT method for frequency analysis, chaotic attractor reconstruction and applications to variable stars, reconstructing a 1D signal from irregular samples, automatic analysis for time series with large gaps, prospects for content-based image retrieval, and a redshift survey in the South Galactic Pole Region.
The comet moment as a measure of DNA damage in the comet assay.
Kent, C R; Eady, J J; Ross, G M; Steel, G G
1995-06-01
The development of rapid assays of radiation-induced DNA damage requires the definition of reliable parameters for the evaluation of dose-response relationships to compare with cellular endpoints. We have used the single-cell gel electrophoresis (SCGE) or 'comet' assay to measure DNA damage in individual cells after irradiation. Both the alkaline and neutral protocols were used. In both cases, DNA was stained with ethidium bromide and viewed using a fluorescence microscope at 516-560 nm. Images of comets were stored as 512 x 512 pixel images using OPTIMAS, an image analysis software package. Using this software we tested various parameters for measuring DNA damage. We have developed a method of analysis that rigorously conforms to the mathematical definition of the moment of inertia of a plane figure. This parameter does not require the identification of separate head and tail regions, but rather calculates a moment of the whole comet image. We have termed this parameter 'comet moment'. This method is simple to calculate and can be performed using most image analysis software packages that support macro facilities. In experiments on CHO-K1 cells, tail length was found to increase linearly with dose, but plateaued at higher doses. Comet moment also increased linearly with dose, but over a larger dose range than tail length and had no tendency to plateau.
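A whole-image moment of the kind described can be computed directly from the comet intensity distribution. The numpy sketch below is an illustration of that idea under simple assumptions (background-subtracted image, electrophoresis along the x axis), not the original OPTIMAS macro.

# Illustrative numpy sketch of a whole-comet moment: the intensity-weighted
# second moment about the comet centroid along the electrophoresis (x) axis.
# This follows the idea described in the abstract, not the OPTIMAS macro.
import numpy as np

def comet_moment(image, background=0.0):
    """image: 2D array of comet fluorescence intensities."""
    signal = np.clip(image.astype(float) - background, 0.0, None)
    total = signal.sum()
    if total == 0:
        return 0.0
    xs = np.arange(signal.shape[1])
    column_sums = signal.sum(axis=0)
    x_centroid = (column_sums * xs).sum() / total
    # Moment of inertia of the intensity distribution about the centroid.
    return (column_sums * (xs - x_centroid) ** 2).sum() / total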
Tondare, Vipin N; Villarrubia, John S; Vladár, András E
2017-10-01
Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
Re-design of apple pia packaging using quality function deployment method
NASA Astrophysics Data System (ADS)
Pulungan, M. H.; Nadira, N.; Dewi, I. A.
2018-03-01
This study aimed to identify the attributes for premium apple pia packaging, to determine the technical responses to be carried out by the Permata Agro Mandiri Small and Medium Enterprise (SME), and to design a new apple pia packaging acceptable to the SME. The Quality Function Deployment (QFD) method was employed to improve the apple pia packaging design, with a data analysis consisting of seven stages. The results indicated that the 'WHAT' attributes required by the customers include the graphic design, dimensions, capacity, shape, strength, and resistance of the packaging. The technical responses to be conducted by the SME were as follows: attractive visual packaging designs, attractive colors, clear images and information, packaging size dimensions, larger-capacity packaging (more product content), ergonomic premium packaging, packaging that is not easily torn, and impact-resistant packaging materials. The findings further confirmed that the design of premium apple pia packaging accepted by the SME was the one with a capacity of ten apple pia, or 200 g in weight, and with a rectangular (beam) shape. The packaging material used was a duplex carton with a grammage of 400 g/m2; the outer part of the packaging was coated with plastic and the inside was lined with an additional duplex carton. The acceptable packaging dimensions were 30 cm x 5 cm x 3 cm (L x W x H) with a mix of black and yellow in the graphic design.
FTOOLS: A general package of software to manipulate FITS files
NASA Astrophysics Data System (ADS)
Blackburn, J. K.; Shaw, R. A.; Payne, H. E.; Hayes, J. J. E.; Heasarc
1999-12-01
FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self documenting through a stand alone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
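The kind of row selection FTOOLS provides for FITS tables can be illustrated with a short Python sketch using astropy.io.fits rather than the FTOOLS command-line tasks themselves; the file name and column names below are placeholders.

# Hypothetical example of the kind of FITS-table operation FTOOLS provides
# (row selection on a boolean expression), sketched with astropy rather than
# the FTOOLS command-line tasks. File and column names are placeholders.
from astropy.io import fits

with fits.open("events.fits") as hdul:
    table = hdul[1].data                      # first binary-table extension
    # Keep rows satisfying a boolean expression, e.g. an energy band cut.
    mask = (table["ENERGY"] > 0.5) & (table["ENERGY"] < 7.0)
    selected = table[mask]
    fits.BinTableHDU(data=selected, header=hdul[1].header).writeto(
        "events_filtered.fits", overwrite=True)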
Applications of magnetic resonance image segmentation in neurology
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu
1999-05-01
After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals are integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.
Type of packaging affects the colour stability of vitamin E enriched beef.
Nassu, Renata T; Uttaro, Bethany; Aalhus, Jennifer L; Zawadski, Sophie; Juárez, Manuel; Dugan, Michael E R
2012-12-01
Colour stability is a very important parameter for meat retail display, as appearance of the product is the deciding factor for consumers at time of purchase. This study investigated the possibility of extending appearance shelf-life through the combined use of packaging method (overwrapping - OVER, modified atmosphere - MAP, vacuum skin packaging - VSP and a combination of modified atmosphere and vacuum skin packaging - MAPVSP) and antioxidants (vitamin E enriched beef). Retail attributes (appearance, lean colour, % surface discolouration), as well as colour space analysis of images for red, green and blue parameters, were measured over 18 days. MAPVSP provided the most desirable retail appearance during the first 4 days of retail display, while VSP-HB had the best colour stability. Overall, packaging type was more influential than α-tocopherol levels on meat colour stability, although α-tocopherol levels (>4 μg g(-1) meat) had a protective effect when using high-oxygen packaging methods. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Analytic programming with FMRI data: a quick-start guide for statisticians using R.
Eloyan, Ani; Li, Shanshan; Muschelli, John; Pekar, Jim J; Mostofsky, Stewart H; Caffo, Brian S
2014-01-01
Functional magnetic resonance imaging (fMRI) is a thriving field that plays an important role in medical imaging analysis, biological and neuroscience research and practice. This manuscript gives a didactic introduction to the statistical analysis of fMRI data using the R project, along with the relevant R code. The goal is to give statisticians who would like to pursue research in this area a quick tutorial for programming with fMRI data. References of relevant packages and papers are provided for those interested in more advanced analysis.
Dispersed Fringe Sensing Analysis - DFSA
NASA Technical Reports Server (NTRS)
Sigrist, Norbert; Shi, Fang; Redding, David C.; Basinger, Scott A.; Ohara, Catherine M.; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.; Spechler, Joshua A.
2012-01-01
Dispersed Fringe Sensing (DFS) is a technique for measuring and phasing segmented telescope mirrors using a dispersed broadband light image. DFS is capable of breaking the monochromatic light ambiguity, measuring absolute piston errors between segments of large segmented primary mirrors to tens of nanometers accuracy over a range of 100 micrometers or more. The DFSA software tool analyzes DFS images to extract DFS encoded segment piston errors, which can be used to measure piston distances between primary mirror segments of ground and space telescopes. This information is necessary to control mirror segments to establish a smooth, continuous primary figure needed to achieve high optical quality. The DFSA tool is versatile, allowing precise piston measurements from a variety of different optical configurations. DFSA technology may be used for measuring wavefront pistons from sub-apertures defined by adjacent segments (such as the Keck Telescope), or from separated sub-apertures used for testing large optical systems (such as sub-aperture wavefront testing for large primary mirrors using auto-collimating flats). An experimental demonstration of the coarse-phasing technology with verification of DFSA was performed at the Keck Telescope. DFSA includes image processing, wavelength and source spectral calibration, fringe extraction line determination, dispersed fringe analysis, and wavefront piston sign determination. The code is robust against internal optical system aberrations and against spectral variations of the source. In addition to the DFSA tool, the software package contains a simple but sophisticated MATLAB model to generate dispersed fringe images of optical system configurations in order to quickly estimate the coarse phasing performance given the optical and operational design requirements. Combining MATLAB (a high-level language and interactive environment developed by MathWorks), MACOS (JPL's software package for Modeling and Analysis for Controlled Optical Systems), and DFSA provides a unique optical development, modeling and analysis package to study current and future approaches to coarse phasing controlled segmented optical systems.
Reliability of CGA/LGA/HDI Package Board/Assembly (Final Report)
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2014-01-01
Package manufacturers are now offering commercial-off-the-shelf column grid array (COTS CGA) packaging technologies in high-reliability versions. Understanding the process and quality assurance (QA) indicators for reliability is important for low-risk insertion of these advanced electronics packages. The previous reports, released in January 2012 and January 2013, presented package test data, assembly information, and reliability evaluation by thermal cycling for CGA packages with 1752, 1517, 1509, and 1272 inputs/outputs (I/Os) and 1-mm pitch. They presented thermal cycling (-55C to either 100C or 125C) test results for up to 200 cycles. This report presents results for up to 500 thermal cycles, with quality assurance and failure analysis evaluation represented by optical photomicrographs, 2D real-time X-ray images, dye-and-pry photomicrographs, and optical/scanning electron microscopy (SEM) cross-sectional images. The report also presents the assembly challenges of reflowing, by either vapor phase or a rework station, CGA and land grid array (LGA) versions of three high-I/O packages in both ceramic and plastic configurations. A new test vehicle was designed with a high-density interconnect (HDI) printed circuit board (PCB) with microvia-in-pad to accommodate both LGA packages and a large number of fine-pitch ball grid arrays (BGAs). The LGAs were either assembled onto the HDI PCB directly as LGAs, or solder paste was printed and reflowed first to form solder domes on the pads before assembly. Both plastic BGAs with 1156 I/Os and ceramic LGAs were assembled. The X-ray inspection results are also presented, as well as failures after 200 thermal cycles. Lessons learned on assembly of ceramic LGAs are also presented.
Low-cost digital image processing at the University of Oklahoma
NASA Technical Reports Server (NTRS)
Harrington, J. A., Jr.
1981-01-01
Computer-assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house generated preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and image analysis using either of the two approaches for low-cost LANDSAT data processing are described.
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
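The correlation-based segmentation idea described above can be sketched as a local Pearson correlation between two detection channels followed by thresholding and connected-component labelling. The Python example below is a simplified illustration under an assumed window size and threshold, not the authors' exact algorithm.

# Minimal sketch of correlation-based segmentation: compute a local Pearson
# correlation between two fluorescence channels in a sliding window,
# threshold it, and label connected components. Window size and threshold
# are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter, label

def local_correlation(ch1, ch2, window=7):
    ch1 = ch1.astype(float)
    ch2 = ch2.astype(float)
    mean1 = uniform_filter(ch1, window)
    mean2 = uniform_filter(ch2, window)
    cov = uniform_filter(ch1 * ch2, window) - mean1 * mean2
    var1 = uniform_filter(ch1 ** 2, window) - mean1 ** 2
    var2 = uniform_filter(ch2 ** 2, window) - mean2 ** 2
    return cov / np.sqrt(np.clip(var1 * var2, 1e-12, None))

def segment_nuclei(ch1, ch2, corr_threshold=0.6):
    corr_map = local_correlation(ch1, ch2)
    labels, n_objects = label(corr_map > corr_threshold)
    return labels, n_objects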
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Compact Video Microscope Imaging System Implemented in Colloid Studies
NASA Technical Reports Server (NTRS)
McDowell, Mark
2002-01-01
Photographs show the fiber-optic light source, the microscope and charge-coupled device (CCD) camera head connected to the camera body, the CCD camera body feeding data to an image acquisition board in a PC, and a Cartesian robot controlled via a PC board. The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, has been adapted for use in colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.
Small PACS implementation using publicly available software
NASA Astrophysics Data System (ADS)
Passadore, Diego J.; Isoardi, Roberto A.; Gonzalez Nicolini, Federico J.; Ariza, P. P.; Novas, C. V.; Omati, S. A.
1998-07-01
Building cost-effective PACS solutions is a main concern in developing countries. Hardware and software components are generally much more expensive than in developed countries, and tighter financial constraints further contribute to the slow rate of PACS implementation. The extensive use of the Internet for sharing resources and information has brought a broad number of freely available software packages to an ever-increasing number of users. In the field of medical imaging it is possible to find image format conversion packages, DICOM-compliant servers for all kinds of service classes, databases, web servers, image visualization, manipulation and analysis tools, etc. This paper describes a PACS implementation for review and storage built on freely available software. It currently integrates four diagnostic modalities (PET, CT, MR and NM), a Radiotherapy Treatment Planning workstation and several computers in a local area network, for image storage, database management and image review, processing and analysis. It also includes a web-based application that allows remote users to query the archive for studies from any workstation and to view the corresponding images and reports. We conclude that the advantage of using this approach is twofold. It allows a full understanding of all the issues involved in the implementation of a PACS, and it also contributes to keeping costs down while enabling the development of a functional system for storage, distribution and review that can prove to be helpful for radiologists and referring physicians.
Results of a low power ice protection system test and a new method of imaging data analysis
NASA Technical Reports Server (NTRS)
Shin, Jaiwon; Bond, Thomas H.; Mesander, Geert A.
1992-01-01
Tests were conducted on a BF Goodrich De-Icing System's Pneumatic Impulse Ice Protection (PIIP) system in the NASA Lewis Icing Research Tunnel (IRT). Characterization studies were done on shed ice particle size by changing the input pressure and cycling time of the PIIP de-icer. The shed ice particle size was quantified using a newly developed image software package. The tests were conducted on a 1.83 m (6 ft) span, 0.53 m (21 in) chord NACA 0012 airfoil operated at a 4 degree angle of attack. The IRT test conditions were a -6.7 C (20 F) glaze ice, and a -20 C (-4 F) rime ice. The ice shedding events were recorded with a high speed video system. A detailed description of the image processing package and the results generated from this analytical tool are presented.
Reduce Fluid Experiment System: Flight data from the IML-1 Mission
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Harper, Sabrina
1995-01-01
Processing and data reduction of holographic images from the International Microgravity Laboratory 1 (IML-1) presents some interesting challenges in determining the effects of microgravity on crystal growth processes. Use of several processing techniques, including the Computerized Holographic Image Processing System and the Software Development Package (SDP-151) will provide fundamental information for holographic and schlieren analysis of the space flight data.
Image dissector control and data system, part 1. [instrument packages and equipment specifications
NASA Technical Reports Server (NTRS)
1974-01-01
A general description of the image dissector control and data system is presented along with detailed design information, operating instructions, and maintenance and trouble-shooting procedures for the four instrumentation packages. The four instrumentation packages include a 90 inch telescope, a simplified telescope module for use on the 90 inch or other telescopes, a photographic plate scanner module which permits the scanning of astronomical photographic plates in the laboratory, and the lunar experiment package module.
NASA Astrophysics Data System (ADS)
Agarwal, Smriti; Singh, Dharmendra
2016-04-01
Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz, discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG) has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, diagonal crack along with the non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for HOG feature extraction technique towards non-destructive quality inspection with appreciably low false alarm as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for a financially and commercially competent industrial growth.
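The best-performing pipeline reported above (HOG features feeding an ANN classifier) can be sketched with scikit-image and scikit-learn as below; image sizes, HOG parameters, and the train/test split are assumptions, and this is not the authors' implementation.

# Sketch of a HOG-feature + neural-network classification pipeline of the
# kind the abstract reports as best-performing. Image shapes, HOG parameters,
# and the train/test data are placeholders.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def hog_features(images):
    """images: iterable of 2D MMW intensity images of equal size."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

def train_fault_classifier(images, labels):
    """images: (n, h, w) array of tile images; labels: fault class per image."""
    X = hog_features(images)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)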
The GONG Data Reduction and Analysis System. [solar oscillations
NASA Technical Reports Server (NTRS)
Pintar, James A.; Andersen, Bo Nyborg; Andersen, Edwin R.; Armet, David B.; Brown, Timothy M.; Hathaway, David H.; Hill, Frank; Jones, Harrison P.
1988-01-01
Each of the six GONG observing stations will produce three 16-bit, 256 x 256 images of the Sun every 60 sec of sunlight. These data will be transferred from the observing sites to the GONG Data Management and Analysis Center (DMAC), in Tucson, on high-density tapes at a combined rate of over 1 gigabyte per day. The contemporaneous processing of these data will produce several standard data products and will require a sustained throughput in excess of 7 megaflops. Peak rates may exceed 50 megaflops. Archives will accumulate at the rate of approximately 1 terabyte per year, reaching nearly 3 terabytes in 3 yr of observing. Researchers will access the data products with a machine-independent GONG Reduction and Analysis Software Package (GRASP). Based on the Image Reduction and Analysis Facility, this package will include database facilities and helioseismic analysis tools. Users may access the data as visitors in Tucson, or may access DMAC remotely through networks, or may process subsets of the data at their local institutions using GRASP or other systems of their choice. Elements of the system will reach the prototype stage by the end of 1988. Full operation is expected in 1992 when data acquisition begins.
Visual Data Analysis for Satellites
NASA Technical Reports Server (NTRS)
Lau, Yee; Bhate, Sachin; Fitzpatrick, Patrick
2008-01-01
The Visual Data Analysis Package is a collection of programs and scripts that facilitate visual analysis of data available from NASA and NOAA satellites, as well as dropsonde, buoy, and conventional in-situ observations. The package features utilities for data extraction, data quality control, statistical analysis, and data visualization. The Hierarchical Data Format (HDF) satellite data extraction routines from NASA's Jet Propulsion Laboratory were customized for specific spatial coverage and file input/output. Statistical analysis includes the calculation of the relative error, the absolute error, and the root mean square error. Other capabilities include curve fitting through the data points to fill in missing data points between satellite passes or where clouds obscure satellite data. For data visualization, the software provides customizable Generic Mapping Tool (GMT) scripts to generate difference maps, scatter plots, line plots, vector plots, histograms, timeseries, and color fill images.
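The error statistics listed above are conventionally computed as follows; since the abstract does not give exact definitions, the numpy sketch below uses the standard forms for paired satellite and in-situ values.

# Small numpy sketch of the error statistics mentioned (relative error,
# absolute error, root mean square error) for satellite vs. in-situ pairs.
# The exact definitions used by the package are not given, so these are the
# conventional forms.
import numpy as np

def error_statistics(satellite, in_situ):
    satellite = np.asarray(satellite, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    diff = satellite - in_situ
    absolute_error = np.mean(np.abs(diff))
    relative_error = np.mean(np.abs(diff) / np.abs(in_situ))
    rmse = np.sqrt(np.mean(diff ** 2))
    return absolute_error, relative_error, rmse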
Fenko, Anna; de Vries, Roxan; van Rompay, Thomas
2018-01-01
This study investigates the relative impact of textual claims and visual metaphors displayed on the product’s package on consumers’ flavor experience and product evaluation. For consumers, strength is one of the most important sensory attributes of coffee. The 2 × 3 between-subjects experiment (N = 123) compared the effects of visual metaphor of strength (an image of a lion located either on top or on the bottom of the package of coffee beans) and the direct textual claim (“extra strong”) on consumers’ responses to coffee, including product expectation, flavor evaluation, strength perception and purchase intention. The results demonstrate that both the textual claim and the visual metaphor can be efficient in communicating the product attribute of strength. The presence of the image positively influenced consumers’ product expectations before tasting. The textual claim increased the perception of strength of coffee and the purchase intention of the product. The location of the image also played an important role in flavor perception and purchase intention. The image located on the bottom of the package increased the perceived strength of coffee and purchase intention of the product compared to the image on top of the package. This result could be interpreted from the perspective of the grounded cognition theory, which suggests that a picture in the lower part of the package would automatically activate the “strong is heavy” metaphor. As heavy objects are usually associated with a position on the ground, this would explain why perceiving a visually heavy package would lead to the experience of a strong coffee. Further research is needed to better understand the relationships between a metaphorical image and its spatial position in food packaging design. PMID:29459840
The IDL astronomy user's library
NASA Technical Reports Server (NTRS)
Landsman, W. B.
1992-01-01
IDL (Interactive Data Language) is a commercial programming, plotting, and image display language, which is widely used in astronomy. The IDL Astronomy User's Library is a central repository of over 400 astronomy-related IDL procedures accessible via anonymous FTP. The author will overview the use of IDL within the astronomical community and discuss recent enhancements at the IDL astronomy library. These enhancements include a fairly complete I/O package for FITS images and tables, an image deconvolution package and an image mosaic package, and access to IDL Open Windows/Motif widgets interface. The IDL Astronomy Library is funded by NASA through the Astrophysics Software and Research Aids Program.
High pressure single-crystal micro X-ray diffraction analysis with GSE_ADA/RSV software
NASA Astrophysics Data System (ADS)
Dera, Przemyslaw; Zhuravlev, Kirill; Prakapenka, Vitali; Rivers, Mark L.; Finkelstein, Gregory J.; Grubor-Urosevic, Ognjen; Tschauner, Oliver; Clark, Simon M.; Downs, Robert T.
2013-08-01
GSE_ADA/RSV is a free software package for custom analysis of single-crystal micro X-ray diffraction (SCμXRD) data, developed with particular emphasis on data from samples enclosed in diamond anvil cells and subject to high pressure conditions. The package has been in extensive use at the high pressure beamlines of Advanced Photon Source (APS), Argonne National Laboratory and Advanced Light Source (ALS), Lawrence Berkeley National Laboratory. The software is optimized for processing of wide-rotation images and includes a variety of peak intensity corrections and peak filtering features, which are custom-designed to make processing of high pressure SCμXRD easier and more reliable.
Student Development of Educational Software: Spin-Offs from Classroom Use of DIAS.
ERIC Educational Resources Information Center
Harrington, John A., Jr.; And Others
1988-01-01
Describes several college courses which encourage students to develop computer software programs in the areas of remote sensing and geographic information systems. A microcomputer-based tutorial package, the Digital Image Analysis System (DIAS), teaches the principles of digital processing. (LS)
CALIPSO: an interactive image analysis software package for desktop PACS workstations
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1990-07-01
The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images.
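One of the analysis modules listed, ejection fraction calculation, reduces to a simple volumetric formula; the sketch below shows that calculation with assumed end-diastolic and end-systolic volumes and is only illustrative of the computation, not of CALIPSO's implementation.

# Minimal sketch of a volumetric ejection-fraction calculation. The
# end-diastolic and end-systolic volumes are assumed inputs (e.g. from
# geometric or densitometric volume measurements).
def ejection_fraction(end_diastolic_volume, end_systolic_volume):
    """Return ejection fraction as a percentage."""
    stroke_volume = end_diastolic_volume - end_systolic_volume
    return 100.0 * stroke_volume / end_diastolic_volume

# Example: EDV = 120 mL, ESV = 50 mL -> EF ~ 58.3%
print(ejection_fraction(120.0, 50.0))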
A software tool for automatic classification and segmentation of 2D/3D medical images
NASA Astrophysics Data System (ADS)
Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur
2013-02-01
Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of its metabolism (PET). However, evaluation of acquired images made by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of partial volume effect in PET images, acquired with PET/MR scanners. This article presents briefly a MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.
OpenComet: An automated tool for comet assay image analysis
Gyori, Benjamin M.; Venkatachalam, Gireedhar; Thiagarajan, P.S.; Hsu, David; Clement, Marie-Veronique
2014-01-01
Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time. PMID:24624335
Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert
1998-01-01
MORPH-I is a set of C-language computer programs for the IBM PC and compatible microcomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.
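MORPH-I computes a Richardson (divider-based) fractal dimension for each traced pore profile; as a rough stand-in, the sketch below estimates a fractal dimension of a binary outline by box counting, which is a related but distinct estimator.

```python
# Box-counting fractal dimension of a binary outline (a related estimator;
# MORPH-I itself uses the Richardson divider method on traced profiles).
import numpy as np

def box_count_dimension(outline, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    h, w = outline.shape
    for s in box_sizes:
        # Count boxes of side s that contain at least one outline pixel.
        trimmed = outline[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example: the outline of a filled disk has dimension close to 1.
yy, xx = np.mgrid[:256, :256]
disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
outline = disk ^ np.roll(disk, 1, axis=0)   # crude one-pixel edge
print(round(box_count_dimension(outline), 2))
```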
Applications of the Coastal Zone Color Scanner in oceanography
NASA Technical Reports Server (NTRS)
Mcclain, C. R.
1988-01-01
Research activity has continued to focus on applications of Coastal Zone Color Scanner (CZCS) imagery in oceanography. A number of regional studies were completed, including investigations of the temporal and spatial variability of phytoplankton populations in the South Atlantic Bight, Northwest Spain, the Weddell Sea, the Bering Sea, the Caribbean Sea, and the tropical Atlantic Ocean. In addition to the regional studies, much work was dedicated to developing ancillary global-scale meteorological and hydrographic data sets to complement the global CZCS processing products. To accomplish this, SEAPAK's image analysis capability was complemented with an interface to GEMPAK (the Severe Storm Branch's meteorological analysis software package) for the analysis and graphical display of gridded data fields. Plans are being made to develop a similar interface to SEAPAK for hydrographic data using EPIC (a hydrographic data analysis package developed by NOAA/PMEL).
Evaluating Dense 3d Reconstruction Software Packages for Oblique Monitoring of Crop Canopy Surface
NASA Astrophysics Data System (ADS)
Brocks, S.; Bareth, G.
2016-06-01
Crop Surface Models (CSMs) are 2.5D raster surfaces representing absolute plant canopy height. Using multiple CSMs generated from data acquired at multiple time steps, crop surface monitoring is enabled. This makes it possible to monitor crop growth over time and can be used for monitoring in-field crop growth variability, which is useful in the context of high-throughput phenotyping. This study aims to evaluate several software packages for dense 3D reconstruction from multiple overlapping RGB images at field and plot scale. A summer barley field experiment located at the Campus Klein-Altendorf of the University of Bonn was observed by acquiring stereo images from an oblique angle using consumer-grade smart cameras. Two such cameras were mounted at an elevation of 10 m and acquired images for a period of two months during the growing period of 2014. The field experiment consisted of nine barley cultivars that were cultivated in multiple repetitions and nitrogen treatments. Manual plant height measurements were carried out at four dates during the observation period. The software packages Agisoft PhotoScan, VisualSfM with CMVS/PMVS2 and SURE are investigated. The point clouds are georeferenced through a set of ground control points. Where adequate results are reached, a statistical analysis is performed.
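A minimal numpy sketch of the crop-surface-model idea described above: plant height is taken as the difference between a canopy surface raster and a bare-ground elevation raster, then averaged per plot. The array names and the plot-ID raster are placeholders, not part of the study's processing chain.

```python
# Minimal crop-surface-model height extraction (placeholder arrays, not the
# study's actual processing chain).
import numpy as np

csm = np.load("csm_2014_06_15.npy")        # canopy surface raster, placeholder
ground = np.load("ground_dem.npy")         # bare-ground DEM, placeholder
plots = np.load("plot_ids.npy")            # integer plot-ID raster, placeholder

height = csm - ground                      # per-pixel plant height
height[height < 0] = 0                     # clamp negative values to zero

# Mean canopy height per plot, e.g. for comparison with manual measurements.
for plot_id in np.unique(plots):
    if plot_id == 0:                       # assume 0 marks background
        continue
    mean_h = height[plots == plot_id].mean()
    print(f"plot {plot_id}: mean height {mean_h:.2f} m")
```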
ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox
NASA Astrophysics Data System (ADS)
Mede, Kyle; Brandt, Timothy D.
2017-08-01
ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.
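ExoSOFT's own interface is not shown in the abstract; as a generic illustration of MCMC orbit fitting on radial-velocity data, the sketch below fits a circular-orbit RV model with the emcee sampler. The data file, model parametrization and starting guesses are placeholders.

```python
# Generic MCMC fit of a circular-orbit radial-velocity model using emcee.
# This illustrates the approach only; it is not ExoSOFT's interface.
import numpy as np
import emcee

t, rv, rv_err = np.loadtxt("rv_data.txt", unpack=True)   # placeholder file

def rv_model(theta, t):
    period, amp, phase, offset = theta
    return amp * np.sin(2.0 * np.pi * t / period + phase) + offset

def log_prob(theta, t, rv, rv_err):
    period, amp = theta[0], theta[1]
    if period <= 0 or amp <= 0:                 # crude flat priors
        return -np.inf
    resid = rv - rv_model(theta, t)
    return -0.5 * np.sum((resid / rv_err) ** 2)

ndim, nwalkers = 4, 32
start = np.array([365.0, 30.0, 0.0, 0.0])       # placeholder initial guess
p0 = start + 1e-3 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t, rv, rv_err))
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)
print(np.median(samples, axis=0))               # posterior medians
```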
Surface roughness and packaging tightness affect calcium lactate crystallization on Cheddar cheese.
Rajbhandari, P; Kindstedt, P S
2014-01-01
Calcium lactate crystals that sometimes form on Cheddar cheese surfaces are a significant expense to manufacturers. Researchers have identified several postmanufacture conditions such as storage temperature and packaging tightness that contribute to crystal formation. Anecdotal reports suggest that physical characteristics at the cheese surface, such as roughness, cracks, and irregularities, may also affect crystallization. The aim of this study was to evaluate the combined effects of surface roughness and packaging tightness on crystal formation in smoked Cheddar cheese. Four 20-mm-thick cross-section slices were cut perpendicular to the long axis of a retail block (~300g) of smoked Cheddar cheese using a wire cutting device. One cut surface of each slice was lightly etched with a cheese grater to create a rough, grooved surface; the opposite cut surface was left undisturbed (smooth). The 4 slices were vacuum packaged at 1, 10, 50, and 90kPa (very tight, moderately tight, loose, very loose, respectively) and stored at 1°C. Digital images were taken at 1, 4, and 8 wk following the first appearance of crystals. The area occupied by crystals and number of discrete crystal regions (DCR) were quantified by image analysis. The experiment was conducted in triplicate. Effects of storage time, packaging tightness, surface roughness, and their interactions were evaluated by repeated-measures ANOVA. Surface roughness, packaging tightness, storage time, and their 2-way interactions significantly affected crystal area and DCR number. Extremely heavy crystallization occurred on both rough and smooth surfaces when slices were packaged loosely or very loosely and on rough surfaces with moderately tight packaging. In contrast, the combination of rough surface plus very tight packaging resulted in dramatic decreases in crystal area and DCR number. The combination of smooth surface plus very tight packaging virtually eliminated crystal formation, presumably by eliminating available sites for nucleation. Cut-and-wrap operations may significantly influence the crystallization behavior of Cheddar cheeses that are saturated with respect to calcium lactate and thus predisposed to form crystals. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
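As a hedged sketch of how crystal area and the number of discrete crystal regions (DCR) might be quantified from a digital image (the paper does not specify its image-analysis routine), the fragment below thresholds a grayscale photograph and labels connected bright regions with scikit-image; the file name and minimum region size are placeholders.

```python
# Sketch: quantify bright crystal area and count discrete crystal regions.
# The paper's own image-analysis routine is not specified; this is illustrative.
import numpy as np
from skimage import io, filters, measure, morphology

gray = io.imread("cheese_slice_wk8.png", as_gray=True)      # placeholder file

mask = gray > filters.threshold_otsu(gray)                  # crystals appear bright
mask = morphology.remove_small_objects(mask, min_size=20)   # placeholder size

labels = measure.label(mask)
regions = measure.regionprops(labels)

total_area_px = int(mask.sum())
print("crystal area (pixels):", total_area_px)
print("discrete crystal regions (DCR):", len(regions))
```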
Ultra high speed image processing techniques. [electronic packaging techniques]
NASA Technical Reports Server (NTRS)
Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.
1981-01-01
Packaging techniques for ultra high speed image processing were developed. These techniques involve the development of a signal feedthrough technique through LSI/VLSI sapphire substrates. This allows the stacking of LSI/VLSI circuit substrates in a three-dimensional package with a greatly reduced length of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.
AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis
Boyle, Thomas J; Bao, Zhirong; Murray, John I; Araya, Carlos L; Waterston, Robert H
2006-01-01
Background The invariant lineage of the nematode Caenorhabditis elegans has potential as a powerful tool for the description of mutant phenotypes and gene expression patterns. We previously described procedures for the imaging and automatic extraction of the cell lineage from C. elegans embryos. That method uses time-lapse confocal imaging of a strain expressing histone-GFP fusions, and a software package, StarryNite, processes the thousands of images and produces output files that describe the location and lineage relationship of each nucleus at each time point. Results We have developed a companion software package, AceTree, which links the images and the annotations using tree representations of the lineage. This facilitates curation and editing of the lineage. AceTree also contains powerful visualization and interpretive tools, such as space-filling models and tree-based expression patterning, that can be used to extract biological significance from the data. Conclusion By pairing a fast lineaging program written in C with a user interface program written in Java, we have produced a powerful software suite for exploring embryonic development. PMID:16740163
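The pairing described above links per-time-point nucleus annotations into a lineage tree; a minimal Python data structure for such a tree (purely illustrative, not AceTree's internal representation) might look like this.

```python
# Minimal lineage-tree node, illustrative only (not AceTree's representation).
class Cell:
    def __init__(self, name, positions=None, parent=None):
        self.name = name                       # e.g. "ABa"
        self.positions = positions or []       # (t, x, y, z) nucleus locations
        self.parent = parent
        self.children = []

    def divide(self, name_a, name_b):
        """Record a division into two daughter cells."""
        self.children = [Cell(name_a, parent=self), Cell(name_b, parent=self)]
        return self.children

    def leaves(self):
        """All terminal cells below this node."""
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

root = Cell("P0")
ab, p1 = root.divide("AB", "P1")
ab.divide("ABa", "ABp")
print([c.name for c in root.leaves()])   # ['ABa', 'ABp', 'P1']
```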
Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization
NASA Technical Reports Server (NTRS)
Beaulieu, K.
2014-01-01
Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.
Analyzing multimodality tomographic images and associated regions of interest with MIDAS
NASA Astrophysics Data System (ADS)
Tsui, Wai-Hon; Rusinek, Henry; Van Gelder, Peter; Lebedev, Sergey
2001-07-01
This paper outlines the design and features incorporated in a software package for analyzing multi-modality tomographic images. The package MIDAS has been evolving for the past 15 years and is in wide use by researchers at New York University School of Medicine and a number of collaborating research sites. It was written in the C language and runs on Sun workstations and Intel PCs under the Solaris operating system. A unique strength of the MIDAS package lies in its ability to generate, manipulate and analyze a practically unlimited number of regions of interest (ROIs). These regions are automatically saved in an efficient data structure and linked to associated images. A wide selection of set theoretical (e.g. union, xor, difference), geometrical (e.g. move, rotate) and morphological (grow, peel) operators can be applied to an arbitrary selection of ROIs. ROIs are constructed as a result of image segmentation algorithms incorporated in MIDAS; they also can be drawn interactively. These ROI editing operations can be applied in either 2D or 3D mode. ROI statistics generated by MIDAS include means, standard deviations, centroids and histograms. Other image manipulation tools incorporated in MIDAS are multimodality and within modality coregistration methods (including landmark matching, surface fitting and Woods' correlation methods) and image reformatting methods (using nearest-neighbor, tri-linear or sinc interpolation). Applications of MIDAS include: (1) neuroanatomy research: marking anatomical structures in one orientation, reformatting marks to another orientation; (2) tissue volume measurements: brain structures (PET, MRI, CT), lung nodules (low dose CT), breast density (MRI); (3) analysis of functional (SPECT, PET) experiments by overlaying corresponding structural scans; (4) longitudinal studies: regional measurement of atrophy.
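The ROI operators listed above map naturally onto boolean masks; a small numpy/SciPy sketch of the same kinds of operations (not MIDAS code) is shown below.

```python
# ROI operations on boolean masks, illustrating the kinds of operators listed
# above (union, xor, difference, grow, peel); this is not MIDAS code.
import numpy as np
from scipy import ndimage

roi_a = np.zeros((64, 64), dtype=bool)
roi_b = np.zeros((64, 64), dtype=bool)
roi_a[10:40, 10:40] = True
roi_b[25:55, 25:55] = True

union      = roi_a | roi_b
xor        = roi_a ^ roi_b
difference = roi_a & ~roi_b

grown  = ndimage.binary_dilation(roi_a, iterations=2)   # "grow"
peeled = ndimage.binary_erosion(roi_a, iterations=2)    # "peel"

# Simple ROI statistics over an image.
image = np.random.default_rng(0).normal(size=(64, 64))
print("mean:", image[roi_a].mean(), "sd:", image[roi_a].std())
print("centroid:", ndimage.center_of_mass(roi_a))
```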
Uncertainty in the use of MAMA software to measure particle morphological parameters from SEM images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Daniel S.; Tandon, Lav
The MAMA software package developed at LANL is designed to make morphological measurements on a wide variety of digital images of objects. At LANL, we have focused on using MAMA to measure scanning electron microscope (SEM) images of particles, as this is a critical part of our forensic analysis of interdicted radiologic materials. In order to successfully use MAMA to make such measurements, we must understand the level of uncertainty involved in the process, so that we can rigorously support our quantitative conclusions.
Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1989-05-01
A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration, the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using the standard TCP/IP protocol, and stored locally on magnetic disk. The use of high resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.
PlantCV v2: Image analysis software for high-throughput plant phenotyping.
Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony
2017-01-01
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
Non Contacting Evaluation of Strains and Cracking Using Optical and Infrared Imaging Techniques
1988-08-22
Compatible Zenith Z-386 microcomputer with plotter. II. 3-D Motion Measuring System: 1. Complete OPTOTRAK three-dimensional digitizing system; system includes ... acquisition unit with 16 single-ended analog input channels; 3. Data Analysis Package software (KINEPLOT); 4. Extra OPTOTRAK camera (max 224 per system
Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M
2008-01-01
To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. The intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.
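The study above relies on intra-class correlation for test-retest reliability; the specific ICC form is not stated, so as an illustration the sketch below computes the one-way random-effects ICC(1,1) from a subjects-by-repeated-measurements matrix. The example numbers are made up.

```python
# One-way random-effects ICC(1,1) from an (n_subjects, k_measurements) matrix.
# The study's exact ICC variant is not stated; this is one common form.
import numpy as np

def icc_1_1(x):
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    subject_means = x.mean(axis=1)
    ss_between = k * np.sum((subject_means - grand) ** 2)
    ss_within = np.sum((x - subject_means[:, None]) ** 2)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Example: VAT area (cm^2) measured twice for five subjects (made-up numbers).
vat = [[150, 148], [98, 102], [210, 205], [75, 80], [130, 133]]
print(round(icc_1_1(vat), 3))
```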
Volumetric neuroimage analysis extensions for the MIPAV software package.
Bazin, Pierre-Louis; Cuzzocreo, Jennifer L; Yassa, Michael A; Gandler, William; McAuliffe, Matthew J; Bassett, Susan S; Pham, Dzung L
2007-09-15
We describe a new collection of publicly available software tools for performing quantitative neuroimage analysis. The tools perform semi-automatic brain extraction, tissue classification, Talairach alignment, and atlas-based measurements within a user-friendly graphical environment. They are implemented as plug-ins for MIPAV, a freely available medical image processing software package from the National Institutes of Health. Because the plug-ins and MIPAV are implemented in Java, both can be utilized on nearly any operating system platform. In addition to the software plug-ins, we have also released a digital version of the Talairach atlas that can be used to perform regional volumetric analyses. Several studies are conducted applying the new tools to simulated and real neuroimaging data sets.
2010-06-01
scanners, readers, or imagers. These types of ADCS devices use two slightly different technologies. Laser scanners use a photodiode to measure the...structure of a ship, but the LCS utilizes modular mission packages that can be removed and replaced when the threat, environment, or mission changes...would need to support a wide array of business applications and users (Clarion, 2009). The DoD’s solution to this deficiency is called IUID. IUID is a
FTOOLS: A FITS Data Processing and Analysis Software Package
NASA Astrophysics Data System (ADS)
Blackburn, J. Kent; Greene, Emily A.; Pence, William
1993-05-01
FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. The collection of utilities provides both generic processing and analysis utilities and utilities common to high energy astrophysics data sets. The FTOOLS software package is designed to be both compatible with IRAF and completely stand alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self documenting through the IRAF help facility and a stand alone help task. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
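As a hedged analogue of the row-selection style of task described above (written with astropy rather than the FTOOLS utilities themselves), the Python sketch below reads a FITS binary table, keeps rows matching a boolean expression, and writes the result to a new file; the file and column names are placeholders.

```python
# Analogue of an FTOOLS-style row selection, written with astropy rather than
# the FTOOLS utilities themselves; file and column names are placeholders.
from astropy.io import fits

with fits.open("events.fits") as hdul:
    table = hdul[1].data                      # first extension: binary table

    # Boolean row selection, e.g. events in a given energy band.
    mask = (table["ENERGY"] > 2.0) & (table["ENERGY"] < 10.0)
    filtered = table[mask]
    print("kept", int(mask.sum()), "of", len(table), "rows")

    out = fits.HDUList([fits.PrimaryHDU(), fits.BinTableHDU(data=filtered)])
    out.writeto("events_2to10keV.fits", overwrite=True)
```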
From Cells to Virus Particles: Quantitative Methods to Monitor RNA Packaging
Ferrer, Mireia; Henriet, Simon; Chamontin, Célia; Lainé, Sébastien; Mougel, Marylène
2016-01-01
In cells, positive strand RNA viruses, such as Retroviridae, must selectively recognize their full-length RNA genome among abundant cellular RNAs to assemble and release particles. How viruses coordinate the intracellular trafficking of both RNA and protein components to the assembly sites of infectious particles at the cell surface remains a long-standing question. The mechanisms ensuring packaging of genomic RNA are essential for viral infectivity. Since RNA packaging impacts several essential functions of retroviral replication, such as RNA dimerization, translation and recombination events, many studies require the determination of RNA packaging efficiency and/or RNA packaging ability. Studies of RNA encapsidation rely upon techniques for the identification and quantification of RNA species packaged by the virus. This review focuses on the different approaches available to monitor RNA packaging: Northern blot analysis, ribonuclease protection assay and quantitative reverse transcriptase-coupled polymerase chain reaction as well as the most recent RNA imaging and sequencing technologies. Advantages, disadvantages and limitations of these approaches will be discussed in order to help the investigator choose the most appropriate technique. Although the review was written with the prototypic simple murine leukemia virus (MLV) and complex human immunodeficiency virus type 1 (HIV-1) in mind, the techniques were described in order to benefit a larger community. PMID:27556480
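For the qRT-PCR route mentioned above, relative quantification is often expressed with the 2^-ΔΔCt method; the sketch below shows only that generic arithmetic. The Ct values are made up, and how the calculation maps onto a particular packaging assay depends entirely on the experimental design.

```python
# Generic 2^-delta-delta-Ct relative quantification (made-up Ct values; how this
# maps onto a particular packaging assay depends on the experimental design).
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    delta_sample = ct_target_sample - ct_ref_sample
    delta_control = ct_target_control - ct_ref_control
    return 2.0 ** -(delta_sample - delta_control)

# Example: genomic RNA (target) vs. a reference RNA, comparing a mutant virus
# ("sample") to wild type ("control").
print(round(fold_change(22.1, 18.0, 20.5, 18.2), 2))   # ~0.29-fold
```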
ARCHANGEL: Galaxy Photometry System
NASA Astrophysics Data System (ADS)
Schombert, James
2011-07-01
ARCHANGEL is a Unix-based package for the surface photometry of galaxies. While oriented for large angular size systems (i.e. many pixels), its tools can be applied to any imaging data of any size. The package core contains routines to perform the following critical galaxy photometry functions: sky determination; frame cleaning; ellipse fitting; profile fitting; and total and isophotal magnitudes. The goal of the package is to provide an automated, assembly-line type of reduction system for galaxy photometry of space-based or ground-based imaging data. The procedures outlined in the documentation are flux independent, thus, these routines can be used for non-optical data as well as typical imaging datasets. ARCHANGEL has been tested on several current OS's (RedHat Linux, Ubuntu Linux, Solaris, Mac OS X). A tarball for installation is available at the download page. The main routines are Python and FORTRAN based, therefore, a current installation of Python and a FORTRAN compiler are required. The ARCHANGEL package also contains Python hooks to the PGPLOT package, an XML processor and network tools which automatically link to data archives (i.e. NED, HST, 2MASS, etc) to download images in a non-interactive manner.
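One of the core steps listed above is sky determination; as a generic illustration (not ARCHANGEL's own routine), the sketch below estimates the sky level and noise of an image with sigma-clipped statistics from astropy. The file name is a placeholder.

```python
# Generic sky-level estimate with sigma-clipped statistics; this illustrates
# the concept only and is not ARCHANGEL's sky-determination routine.
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats

data = fits.getdata("galaxy.fits").astype(float)     # placeholder file name

# Clip bright galaxy/star pixels so the statistics reflect the background.
mean, median, std = sigma_clipped_stats(data, sigma=3.0, maxiters=5)
print(f"sky ~ {median:.3f} counts, sigma ~ {std:.3f} counts")

sky_subtracted = data - median
```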
ProFound: Source Extraction and Application to Modern Survey Data
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Davies, L. J. M.; Driver, S. P.; Koushan, S.; Taranu, D. S.; Casura, S.; Liske, J.
2018-05-01
We introduce PROFOUND, a source finding and image analysis package. PROFOUND provides methods to detect sources in noisy images, generate segmentation maps identifying the pixels belonging to each source, and measure statistics like flux, size, and ellipticity. These inputs are key requirements of PROFIT, our recently released galaxy profiling package, where the design aim is that these two software packages will be used in unison to semi-automatically profile large samples of galaxies. The key novel feature introduced in PROFOUND is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is made across saddle points in flux. We apply PROFOUND in a number of simulated and real-world cases, and demonstrate that it behaves reasonably given its stated design goals. In particular, it offers good initial parameter estimation for PROFIT, and also segmentation maps that follow the sometimes complex geometry of resolved sources, whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the PROFOUND and PROFIT pipeline, and adoption is being encouraged by publicly releasing the software for the open source R data analysis platform under an LGPL-3 license on GitHub (github.com/asgr/ProFound).
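A small numpy/SciPy sketch of the dilated-segment photometry idea described above (ProFound itself is an R package, and this is not its code): detect pixels above a threshold, label them into segments, dilate the segmentation map, and sum the flux within the dilated segments.

```python
# Concept sketch of segmentation-map photometry with dilation (ProFound is an
# R package; this numpy/SciPy fragment only illustrates the idea).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.normal(0.0, 1.0, (128, 128))          # fake sky noise
image[60:68, 60:68] += 25.0                       # one fake source

sky_sigma = image.std()                           # crude noise estimate
segments, n_src = ndimage.label(image > 3 * sky_sigma)

# Dilate the segmentation map so the aperture captures low-surface-brightness
# flux around each detection, then sum flux per dilated segment.
dilated = ndimage.grey_dilation(segments, size=(5, 5))
flux = ndimage.sum(image, labels=dilated, index=np.arange(1, n_src + 1))
print("sources:", n_src, "fluxes:", np.round(flux, 1))
```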
[Graphic images on cigarette packages not effective].
Kok, Gerjo; Peters, Gjalt-Jorn Y; Ruiter, Robert A C
2013-01-01
The Dutch Government intends to make graphic images on cigarette packages mandatory. However, contrary to other policy measures to reduce smoking, health warnings do not work. There is no acceptable evidence in favour of graphic images and behaviour change theories suggest methods of change that improve skills, self-efficacy and social support. Thus, theory- and evidence-based policy should focus on prohibiting the tobacco industry from glamourizing packaging and make health communications on packages mandatory. As to the type of communications to be used, theory and evidence suggest that warning of the negative consequences of smoking is not an effective approach. Rather, targeting the most important determinants of the initiation of smoking and its successful cessation - such as skills, self-efficacy and subjective norm - along with the most effective behaviour change methods appears to be the most expedient strategy.
Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.
Chen, Rong; Nixon, Erika; Herskovits, Edward
2016-04-01
Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
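A minimal numpy sketch of the seed-based analysis mentioned above (not ACA's implementation): correlate a seed region's mean time series with every voxel's time series to form a connectivity map. The array shapes and seed mask are placeholder assumptions.

```python
# Minimal seed-based functional connectivity map (illustrative, not ACA code).
import numpy as np

# Placeholder data: time x voxel matrix and a boolean seed mask over voxels.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5000))      # 200 time points, 5000 voxels
seed_mask = np.zeros(5000, dtype=bool)
seed_mask[:50] = True                    # placeholder seed region

seed_ts = data[:, seed_mask].mean(axis=1)

# Pearson correlation of the seed time series with every voxel.
dz = (data - data.mean(axis=0)) / data.std(axis=0)
sz = (seed_ts - seed_ts.mean()) / seed_ts.std()
conn_map = dz.T @ sz / len(sz)           # one r value per voxel
print(conn_map.shape, conn_map[:5])
```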
Instructional image processing on a university mainframe: The Kansas system
NASA Technical Reports Server (NTRS)
Williams, T. H. L.; Siebert, J.; Gunn, C.
1981-01-01
An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The modular form of the package allows easy and rapid upgrades and extensions of the system; the package is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and in remote sensing projects and research. The package comprises three self-contained modules of processing functions: subimage extraction and rectification; image enhancement, preprocessing and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.
National Institute of Standards and Technology Data Gateway
NIST Scoring Package (PC database for purchase) The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.
Martinov, Dobrivoje; Popov, Veljko; Ignjatov, Zoran; Harris, Robert D
2013-04-01
The evolution of communication systems, especially internet-based technologies, has probably affected radiology more than any other medical specialty. The tremendous increase in internet bandwidth has enabled a true revolution in image transmission and easy remote viewing of static images and real-time video streams. Previous reports of real-time telesonography, such as the systems developed for emergency situations and humanitarian work, rely on highly compressed images utilized by a remote sonologist to guide and supervise an inexperienced examiner. We believe that remote sonology could also be utilized in teleultrasound examination of the infant hip. We tested the feasibility of a low-cost teleultrasound system for the infant hip and performed data analysis on the transmitted and original images. Transmission of data was accomplished with Remote Ultrasound (RU), a software package specifically designed for teleultrasound transmission through limited internet bandwidth. While image analysis of the image pairs revealed a statistically significant loss of information, panel evaluation failed to recognize any clinical difference between the original saved and transmitted still images.
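The image-pair comparison described above can be reproduced in spirit (the paper's exact metrics are not given) with standard similarity measures; the sketch below computes PSNR and SSIM between an original and a transmitted ultrasound frame using scikit-image. The file names are placeholders.

```python
# Compare an original and a transmitted ultrasound frame with standard
# similarity metrics (PSNR, SSIM); the paper's exact metrics are not stated.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = io.imread("hip_original.png", as_gray=True)       # placeholder
transmitted = io.imread("hip_transmitted.png", as_gray=True) # placeholder

psnr = peak_signal_noise_ratio(original, transmitted, data_range=1.0)
ssim = structural_similarity(original, transmitted, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```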
Quantitative analysis of tympanic membrane perforation: a simple and reliable method.
Ibekwe, T S; Adeosun, A A; Nwaorgu, O G
2009-01-01
Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T × 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
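The equation above is simply a ratio of pixel areas; a minimal sketch is shown below, with the mask file names as placeholders.

```python
# Percentage perforation = P / T x 100, with P and T measured in pixels².
import numpy as np

perforation_mask = np.load("perforation_mask.npy")   # placeholder boolean mask
membrane_mask = np.load("membrane_mask.npy")         # includes the perforation

P = np.count_nonzero(perforation_mask)
T = np.count_nonzero(membrane_mask)
print(f"perforation: {100.0 * P / T:.1f}% of the tympanic membrane")
```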
Visentin, Sindi; Bevilacqua, Greta; Giraudo, Chiara; Dengo, Caterina; Nalesso, Alessandro; Montisci, Massimo
2017-11-01
Deaths of heroin body packers due to mechanical or chemical intoxication are, thanks to the continuous improvement in packaging techniques, increasingly rare, and almost all the cases reported in the literature refer to drug swallowers. A case of fatal acute heroin intoxication in a body pusher with an unreported packaging technique is presented, and previous deaths due to heroin body packing are reviewed, taking into consideration imaging techniques performed, cause of death, toxicological analysis on biological and non-biological samples, as well as number, position and type of drug packages identified at the dissection of the body. The innovative packaging technique found in the present case, constituted by an external multilayer cellophane casing containing 16 smaller packages of hardened heroin powder, each one covered with cigarette paper and multiple layers of heat-sealed cellophane, was probably used both to avoid chemical complications of package rupture and to create a package with morphological and radiological features different from those reported by previous studies. Drug dealers, in fact, are continually looking for packaging methods that, besides being safer, minimize the risk of detection at the radiological examinations performed, thus increasing the number of false negative findings. The identification of new types of package is therefore important, in order to identify packages that do not have the typical radiological signs, both to protect the patient's health and to avoid the non-recognition of a drug carrier. Despite the multilayer composition of both the smaller packages and the bigger external coverage, these new types of package did not guarantee the greater safety of the drug dealer. Copyright © 2017 Elsevier B.V. All rights reserved.
Zaknun, John J; Rajabi, Hossein; Piepsz, Amy; Roca, Isabel; Dondi, Maurizio
2011-01-01
Under the auspices of the International Atomic Energy Agency, a new-generation, platform-independent, x86-compatible software package was developed for the analysis of scintigraphic renal dynamic imaging studies. It provides nuclear medicine professionals cost-free access to the most recent developments in the field. The software package is a step towards harmonization and standardization. Embedded functionalities render it a suitable tool for education, research, and receiving distant experts' opinions. Another objective of this effort is to allow the introduction of clinically useful parameters of drainage, including normalized residual activity and outflow efficiency. Furthermore, it provides an effective teaching tool for young professionals who are being introduced to dynamic kidney studies by selected teaching case studies. The software facilitates a better understanding through practically approaching different variables and settings and their effect on the numerical results. An effort was made to introduce instruments of quality assurance at the various levels of the program's execution, including visual inspection and automatic detection and correction of patient motion, automatic placement of regions of interest around the kidneys and cortical regions, and placement of a reproducible background region on both primary dynamic and postmicturition studies. The user can calculate the differential renal function through two independent methods, the integral or the Rutland-Patlak approach. Standardized digital reports, storage and retrieval of regions of interest, and built-in database operations allow the generation and tracing of full image reports and of numerical outputs. The software package is undergoing quality assurance procedures to verify the accuracy and the interuser reproducibility with the final aim of launching the program for use by professionals and teaching institutions worldwide. Copyright © 2011 Elsevier Inc. All rights reserved.
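As a heavily simplified illustration of the Rutland-Patlak approach mentioned above (not the reviewed package's implementation), the sketch below fits the standard Patlak-Rutland line, kidney(t)/blood(t) against the time integral of the blood curve divided by blood(t), over an early uptake window; all arrays, units and the fit window are placeholders.

```python
# Simplified Rutland-Patlak fit for renal uptake (illustrative only; not the
# reviewed package's implementation). Arrays and the fit window are placeholders.
import numpy as np

t = np.load("frame_times.npy")          # seconds, placeholder
kidney = np.load("kidney_counts.npy")   # background-subtracted kidney ROI counts
blood = np.load("blood_counts.npy")     # vascular/background ROI counts

x = np.cumsum(blood * np.gradient(t)) / blood    # integral of blood / blood(t)
y = kidney / blood                               # kidney / blood(t)

# Fit only an early uptake window (e.g. ~1-2.5 min); indices are placeholders.
window = slice(6, 16)
slope, intercept = np.polyfit(x[window], y[window], 1)
print(f"Patlak-Rutland slope (uptake constant): {slope:.4f}")
```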
MANTiS: a program for the analysis of X-ray spectromicroscopy data.
Lerotic, Mirna; Mak, Rachel; Wirick, Sue; Meirer, Florian; Jacobsen, Chris
2014-09-01
Spectromicroscopy combines spectral data with microscopy, where typical datasets consist of a stack of images taken across a range of energies over a microscopic region of the sample. Manual analysis of these complex datasets can be time-consuming, and can miss the important traits in the data. With this in mind we have developed MANTiS, an open-source tool developed in Python for spectromicroscopy data analysis. The backbone of the package involves principal component analysis and cluster analysis, classifying pixels according to spectral similarity. Our goal is to provide a data analysis tool which is comprehensive, yet intuitive and easy to use. MANTiS is designed to lead the user through the analysis using story boards that describe each step in detail so that both experienced users and beginners are able to analyze their own data independently. These capabilities are illustrated through analysis of hard X-ray imaging of iron in Roman ceramics, and soft X-ray imaging of a malaria-infected red blood cell.
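The backbone described above (principal component analysis followed by cluster analysis over pixel spectra) can be sketched generically with scikit-learn; this is not MANTiS code, and the stack file and shape are placeholders.

```python
# Generic PCA + k-means clustering of a spectromicroscopy stack
# (energies x rows x cols); illustrative only, not MANTiS code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

stack = np.load("stack.npy")                     # placeholder, shape (E, H, W)
n_e, h, w = stack.shape
pixels = stack.reshape(n_e, -1).T                # one spectrum per pixel

scores = PCA(n_components=4).fit_transform(pixels)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

cluster_map = labels.reshape(h, w)               # spectrally similar regions
mean_spectra = np.array([pixels[labels == k].mean(axis=0) for k in range(5)])
print(cluster_map.shape, mean_spectra.shape)
```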
Graphic warning labels on plain cigarette packs: will they make a difference to adolescents?
McCool, Judith; Webb, Lisa; Cameron, Linda D; Hoek, Janet
2012-04-01
Graphic warning labels and plain cigarette packaging are two initiatives developed to increase quit behaviour among smokers. Although a little is known about how adolescents interpret graphic warning labels, very few studies have examined how plain cigarette packaging would affect adolescents' perceptions of cigarette smoking and smoking behaviour. We explored how teens interpret and respond to graphic warning labels and the plain packaging of cigarettes, to assess the potential these strategies may offer in deterring smoking initiation. Twelve focus group interviews with a sample of 80 14-16 year old students from a diverse range of schools in Auckland, New Zealand were undertaken between June and August 2009. Textual analysis revealed that graphic warning labels may influence adolescents by reiterating a negative image of smokers. Graphic warning on a plain cigarette pack increased the attention paid to graphic warning labels and the overall perceptions of harm caused by cigarette smoking, and reduced the social appeal of cigarette smoking. This research offers evidence on how adolescents are appraising and interpreting graphic warning labels, and explores how dominant appraisals may affect the role graphic warning labels play in preventing smoking. Not only would plain cigarette packaging enhance the salience and impact of graphic warning labels, but it would potentially bolster the overall message that cigarette smoking is harmful. In the context of a comprehensive tobacco control programme, graphic warning labels on plain cigarette packaging present an explicit message about the risks (to health and image) associated with cigarette smoking. Copyright © 2012 Elsevier Ltd. All rights reserved.
PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.
Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan
2009-01-01
Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
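As a generic sketch of the classifier-based analysis described above (using scikit-learn rather than PyMVPA's own interface), the fragment below cross-validates a linear SVM on trial-by-voxel patterns; the data arrays are placeholders.

```python
# Generic multivariate pattern classification of fMRI trial patterns using
# scikit-learn (PyMVPA's own interface is not shown here).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.load("trial_patterns.npy")   # placeholder, shape (n_trials, n_voxels)
y = np.load("trial_labels.npy")     # placeholder condition labels per trial

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy: %.3f" % scores.mean())
```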
PANDA: a pipeline toolbox for analyzing brain diffusion images.
Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang
2013-01-01
Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
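As a standalone illustration of the metrics named above (PANDA itself employs established packages such as FSL for the underlying processing), the sketch below computes FA and MD from diffusion-tensor eigenvalues using the standard formulas.

```python
# Fractional anisotropy (FA) and mean diffusivity (MD) from diffusion-tensor
# eigenvalues, using the standard formulas (illustrative; PANDA itself relies
# on established packages such as FSL for the underlying processing).
import numpy as np

def fa_md(eigvals):
    l1, l2, l3 = eigvals
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(0.5 * ((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return fa, md

# Example eigenvalues (in mm^2/s) typical of anisotropic white matter.
print(fa_md((1.7e-3, 0.3e-3, 0.3e-3)))
```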
NASA Technical Reports Server (NTRS)
Ballester, P.
1992-01-01
MIDAS (Munich Image Data Analysis System) is the image processing system developed at ESO for astronomical data reduction. MIDAS is used for off-line data reduction at ESO and many astronomical institutes all over Europe. In addition to a set of general commands for processing and analyzing images, catalogs, graphics and tables, MIDAS includes specialized packages dedicated to astronomical applications or to specific ESO instruments. Several graphical interfaces are available in the MIDAS environment: XHelp provides an interactive help facility, and XLong and XEchelle enable data reduction of long-slit and echelle spectra. GUI builders facilitate the development of interfaces. All ESO interfaces comply with the ESO User Interfaces Common Conventions, which ensure an identical look and feel for telescope operations, data analysis, and archives.
Eloi, Juliana Cristina; Epifanio, Matias; de Gonçalves, Marília Maia; Pellicioli, Augusto; Vieira, Patricia Froelich Giora; Dias, Henrique Bregolin; Bruscato, Neide; Soder, Ricardo Bernardi; Santana, João Carlos Batista; Mouzaki, Marialena; Baldisserotto, Matteo
2017-01-01
Computed tomography, which uses ionizing radiation and expensive software packages for analysis of scans, can be used to quantify abdominal fat. The objective of this study is to measure abdominal fat with 3T MRI using free software for image analysis and to correlate these findings with anthropometric and laboratory parameters in adolescents. This prospective observational study included 24 overweight/obese and 33 healthy adolescents (mean age 16.55 years). All participants underwent abdominal MRI exams. Visceral and subcutaneous fat area and percentage were correlated with anthropometric parameters, lipid profile, glucose metabolism, and insulin resistance. Student's t test and Mann-Whitney's test were applied. Pearson's chi-square test was used to compare proportions. To determine associations, Pearson's linear correlation or Spearman's correlation was used. In both groups, waist circumference (WC) was associated with visceral fat area (P = 0.001 and P = 0.01, respectively), and triglycerides were associated with fat percentage (P = 0.046 and P = 0.071, respectively). In obese individuals, total cholesterol/HDL ratio was associated with visceral fat area (P = 0.03) and percentage (P = 0.09), and insulin and HOMA-IR were associated with visceral fat area (P = 0.001) and percentage (P = 0.005). 3T MRI can provide reliable and good-quality images for quantification of visceral and subcutaneous fat by using a free software package. The results demonstrate that WC is a good predictor of visceral fat in obese adolescents and that visceral fat area is associated with total cholesterol/HDL ratio, insulin and HOMA-IR.
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Reliability of Semiconductor Laser Packaging in Space Applications
NASA Technical Reports Server (NTRS)
Gontijo, Ivair; Qiu, Yueming; Shapiro, Andrew A.
2008-01-01
A typical setup used to perform lifetime tests of packaged, fiber-pigtailed semiconductor lasers is described, as well as tests performed on a set of four pump lasers. It was found that two lasers failed after 3200 and 6100 hours, respectively, under device-specified bias conditions at elevated temperatures. Failure analysis of the lasers indicates imperfections and carbon contamination of the laser metallization, possibly from improperly cleaned photoresist. SEM imaging of the front facet of one of the lasers, although of poor quality due to optical fiber charging effects, shows evidence of catastrophic damage at the facet. More stringent manufacturing controls with 100% visual inspection of laser chips are needed to prevent imperfect lasers from proceeding to packaging and ending up in space applications, where failure can result in the loss of a space flight mission.
Macy, Jonathan T; Chassin, Laurie; Presson, Clark C; Yeung, Ellen
2016-01-01
To test the effect of exposure to the US Food and Drug Administration's proposed graphic images with text warning statements for cigarette packages on implicit and explicit attitudes towards smoking. A two-session web-based study was conducted with 2192 young adults aged 18-25 years. During session one, demographics, smoking behaviour, and baseline implicit and explicit attitudes were assessed. Session two, completed on average 18 days later, contained random assignment to viewing one of three sets of cigarette packages: graphic images with text warnings, text warnings only, or current US Surgeon General's text warnings. Participants then completed post-exposure measures of implicit and explicit attitudes. ANCOVAs tested the effect of condition on the outcomes, controlling for baseline attitudes. Smokers who viewed packages with graphic images plus text warnings demonstrated more negative implicit attitudes compared to smokers in the other conditions (p = .004). For the entire sample, explicit attitudes were more negative for those who viewed graphic images plus text warnings compared to those who viewed current US Surgeon General's text warnings (p = .014), but there was no difference compared to those who viewed text-only warnings. Graphic health warnings on cigarette packages can influence young adult smokers' implicit attitudes towards smoking.
NASA Astrophysics Data System (ADS)
Schwartz, Richard A.; Zarro, D.; Csillaghy, A.; Dennis, B.; Tolbert, A. K.; Etesi, L.
2009-05-01
We report on our activities to integrate VSO search and retrieval capabilities into standard data access, display, and analysis tools. In addition to its standard Web-based search form, the VSO provides an Interactive Data Language (IDL) client (vso_search) that is available through the Solar Software (SSW) package. We have incorporated this client into an IDL-widget interface program (show_synop) that allows for more simplified searching and downloading of VSO datasets directly into a user's IDL data analysis environment. In particular, we have provided the capability to read VSO datasets into a general purpose IDL package (plotman) that can display different datatypes (lightcurves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. Currently, the show_synop tool supports access to ground-based and space-based (SOHO, STEREO, and Hinode) observations, and has the capability to include new datasets as they become available. A user encounters two major hurdles when using the VSO: (1) Instrument-specific software (such as level-0 file readers and data-prepping procedures) may not be available in the user's local SSW distribution. (2) Recent calibration files (such as flat-fields) are not automatically distributed with the analysis software. To address these issues, we have developed a dedicated server (prepserver) that incorporates all the latest instrument-specific software libraries and calibration files. The prepserver uses an IDL-Java bridge to read and implement data processing requests from a client and return a processed data file that can be readily displayed with the show_synop/plotman package. The advantage of the prepserver is that the user is only required to install the general branch (gen) of the SSW tree, and is freed from the more onerous task of installing instrument-specific libraries and calibration files. We will demonstrate how the prepserver can be used to read, process, and overlay SOHO/EIT, TRACE, SECCHI/EUVI, and RHESSI images.
NASA Astrophysics Data System (ADS)
Lawson, P.; Stamnes, K.; Stamnes, J.; Zmarzly, P.; O'Connor, D.; Koskulics, J.; Hamre, B.
2008-12-01
A tethered balloon system specifically designed to collect microphysical data in mixed-phase clouds was deployed in Arctic stratus clouds during May 2008 near Ny-Ålesund, Svalbard, at 79 degrees North Latitude. This is the first time a tethered balloon system with a cloud particle imager (CPI) that records high-resolution digital images of cloud drops and ice particles has been operated in cloud. The custom tether supplies electrical power to the instrument package, which in addition to the CPI houses a 4-pi short-wavelength radiometer and a met package that measures temperature, humidity, pressure, GPS position, wind speed and direction. The instrument package was profiled vertically through cloud up to altitudes of 1.6 km. Since power was supplied to the instrument package from the ground, it was possible to keep the balloon package aloft for extended periods of time, up to 9 hours at Ny-Ålesund, which was limited only by crew fatigue. CPI images of cloud drops and the sizes, shapes and degree of riming of ice particles are shown throughout vertical profiles of Arctic stratus clouds. The images show large regions of mixed-phase cloud from -8 to -2 °C. The predominant ice crystal habits in these regions are needles and aggregates of needles. The amount of ice in the mixed-phase clouds varied considerably and did not appear to be a function of temperature. On some occasions, ice was observed near cloud base at -2 °C with supercooled cloud above to -8 °C that was devoid of ice. Measurements of shortwave radiation are also presented. Correlations between particle distributions and radiative measurements will be analyzed to determine the effect of these Arctic stratus clouds on radiative forcing.
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
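As a rough illustration of the central operation described (convolving an astrophysical scene with a coronagraphic PSF), the following Python sketch builds a toy scene and a simplistic Gaussian stand-in for the PSF; crispy's actual time-varying PSFs, detector models, and parameter names are not reproduced here.

# A toy version of the core operation (convolving a scene with a coronagraphic
# PSF); crispy's actual time-varying PSFs and detector models are not shown.
import numpy as np
from scipy.signal import fftconvolve

scene = np.zeros((256, 256))
scene[128, 128] = 1.0                     # host star
scene[128, 150] = 1e-3                    # faint planet (arbitrary contrast)

yy, xx = np.mgrid[-32:33, -32:33]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()                          # simplistic stand-in for a CGI PSF

focal_plane = fftconvolve(scene, psf, mode="same")
print(focal_plane.shape)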
Characterization of fiber diameter using image analysis
NASA Astrophysics Data System (ADS)
Baheti, S.; Tunak, M.
2017-10-01
Due to their high surface area and porosity, the applications of nanofibers have increased in recent years. In the production process, determination of average fiber diameter and fiber orientation is crucial for quality assessment. The objective of the present study was to compare the relative performance of different methods discussed in the literature for estimation of fiber diameter. In this work, the automated fiber diameter analysis approaches available in the literature were implemented and validated using simulated images of known fiber diameter. Finally, all methods were compared for their reliability and accuracy in estimating fiber diameter in electrospun nanofiber membranes, based on the obtained means and standard deviations.
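One widely used diameter-estimation approach, sampling the Euclidean distance transform along the fiber skeleton, can be validated on a simulated fiber of known width much as described above; the sketch below is a generic Python illustration of that idea and is not taken from any of the specific packages compared in the study.

# A minimal sketch of one common diameter-estimation approach (distance
# transform sampled on the fiber skeleton), tested on a simulated fiber of
# known width; not the specific packages compared in the study.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

img = np.zeros((200, 200), dtype=bool)
img[95:105, :] = True                 # simulated horizontal fiber, 10 px wide

dist = distance_transform_edt(img)    # distance to nearest background pixel
skel = skeletonize(img)               # one-pixel-wide centreline

diameters = 2.0 * dist[skel]          # local diameter along the centreline
print(diameters.mean())               # approximately 10 px for the simulated fiber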
Practical Approach for Hyperspectral Image Processing in Python
NASA Astrophysics Data System (ADS)
Annala, L.; Eskelinen, M. A.; Hämäläinen, J.; Riihinen, A.; Pölönen, I.
2018-04-01
Python is a very popular programming language among data scientists around the world. Python can also be used in hyperspectral data analysis. There are some toolboxes designed for spectral imaging, such as Spectral Python and HyperSpy, but there is a need for an analysis pipeline that is easy to use and agile enough for different solutions. We propose a Python pipeline built on the packages xarray, Holoviews and scikit-learn. We have also developed some tools of our own, MaskAccessor, VisualisorAccessor and a spectral index library, which likewise fulfill our goal of easy and agile data processing. In this paper we present our processing pipeline and demonstrate it in practice.
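A minimal sketch of the kind of pipeline described, wrapping a hyperspectral cube in xarray and feeding flattened pixels to scikit-learn, is shown below; the authors' own accessors (MaskAccessor, VisualisorAccessor) and spectral index library are not reproduced, and the cube dimensions are invented.

# A rough illustration of the kind of pipeline described (xarray cube +
# scikit-learn); the MaskAccessor/VisualisorAccessor tools themselves are not
# reproduced here.
import numpy as np
import xarray as xr
from sklearn.decomposition import PCA

cube = xr.DataArray(
    np.random.rand(120, 100, 100),                 # (band, y, x) hyperspectral cube
    dims=("band", "y", "x"),
    coords={"band": np.linspace(400, 1000, 120)},  # wavelengths in nm
)

# Flatten the spatial dimensions to a (pixel, band) matrix for scikit-learn.
pixels = cube.stack(pixel=("y", "x")).transpose("pixel", "band").values

scores = PCA(n_components=3).fit_transform(pixels)  # per-pixel PCA scores
score_maps = scores.reshape(100, 100, 3)            # back to image space
print(score_maps.shape)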
3-D interactive visualisation tools for HI spectral line imaging
NASA Astrophysics Data System (ADS)
van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.
2017-06-01
Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.
The integration of a LANDSAT analysis capability with a geographic information system
NASA Technical Reports Server (NTRS)
Nordstrand, E. A.
1981-01-01
The integration of LANDSAT data was achieved through the development of a flexible, compatible analysis tool and the use of an existing data base to select the usable data from a LANDSAT analysis. The software package allows manipulation of grid cell data plus the flexibility to allow the user to include FORTRAN statements for special functions. Using this combination of capabilities the user can classify a LANDSAT image and then selectively merge the results with other data that may exist for the study area.
Design of FPGA ICA for hyperspectral imaging processing
NASA Astrophysics Data System (ADS)
Nordin, Anis; Hsu, Charles C.; Szu, Harold H.
2001-03-01
The remote sensing problem which uses hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different materials present in the pixel. This can be further used to deduce areas which contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be implemented using an Independent Component Analysis (ICA) algorithm. The ICA algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in hardware, or firmware, in order to improve its computational speed. Hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA for high resolution images and a large number of channels. Here, a pipelined solution of the firmware, realized using FPGAs, is drawn out and simulated using C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate the actual implementation in hardware. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
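For reference, the software side of the de-mixing step can be reproduced with the FastICA implementation in scikit-learn, as in the hedged sketch below; this is only a software illustration of blind source separation on synthetic channel data, not the FPGA/firmware design that is the subject of the paper.

# Software reference of the ICA de-mixing step (the FPGA design itself is not
# reproduced here); synthetic two-source mixture for illustration.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_pixels = 5000
sources = np.stack([rng.exponential(size=n_pixels),   # e.g. "vegetation" abundance
                    rng.exponential(size=n_pixels)])  # e.g. "water" abundance
mixing = np.array([[0.7, 0.3],
                   [0.4, 0.6],
                   [0.2, 0.8]])                       # 3 spectral channels
channels = mixing @ sources                           # observed band images (flattened)

ica = FastICA(n_components=2, random_state=0)
unmixed = ica.fit_transform(channels.T)               # (pixels, sources)
print(unmixed.shape)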
Processing, mosaicking and management of the Monterey Bay digital sidescan-sonar images
Chavez, P.S.; Isbrecht, J.; Galanis, P.; Gabel, G.L.; Sides, S.C.; Soltesz, D.L.; Ross, Stephanie L.; Velasco, M.G.
2002-01-01
Sidescan-sonar imaging systems with digital capabilities have now been available for approximately 20 years. In this paper we present several of the digital image processing techniques developed by the U.S. Geological Survey (USGS) and used to apply intensity/radiometric and geometric corrections, as well as to enhance and digitally mosaic sidescan-sonar images of the Monterey Bay region. New software run by a WWW server was designed and implemented to allow very large image data sets, such as the digital mosaic, to be easily viewed interactively, including the ability to roam throughout the digital mosaic at the web site in either compressed or full 1-m resolution. The processing is separated into two stages: preprocessing and information extraction. In the preprocessing stage, sensor-specific algorithms are applied to correct for both geometric and intensity/radiometric distortions introduced by the sensor. This is followed by digital mosaicking of the track-line strips into quadrangle format, which can be used as input to either visual or digital image analysis and interpretation. An automatic seam removal procedure was used in combination with an interactive digital feathering/stenciling procedure to help minimize tone or seam matching problems between image strips from adjacent track-lines. The sidescan-sonar image processing package is part of the USGS Mini Image Processing System (MIPS) and has been designed to process data collected by any 'generic' digital sidescan-sonar imaging system. The USGS MIPS software, developed over the last 20 years as a public domain package, is available on the WWW at: http://terraweb.wr.usgs.gov/trs/software.html.
TANGO: a generic tool for high-throughput 3D image analysis for studying nuclear organization.
Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas
2013-07-15
The cell nucleus is a highly organized cellular organelle that contains the genetic material. The study of nuclear architecture has become an important field of cellular biology. Extracting quantitative data from 3D fluorescence imaging helps understand the functions of different nuclear compartments. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here, we describe Tools for Analysis of Nuclear Genome Organization (TANGO), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a coherent framework allowing biologists to perform the complete analysis process of 3D fluorescence images by combining two environments: ImageJ (http://imagej.nih.gov/ij/) for image processing and quantitative analysis and R (http://cran.r-project.org) for statistical processing of measurement results. It includes an intuitive user interface providing the means to precisely build a segmentation procedure and set up analyses, without requiring programming skills. TANGO is a versatile tool able to process large sets of images, allowing quantitative study of nuclear organization. TANGO is composed of two programs: (i) an ImageJ plug-in and (ii) a package (rtango) for R. They are both free and open source, available (http://biophysique.mnhn.fr/tango) for Linux, Microsoft Windows and Macintosh OSX. Distribution is under the GPL v.2 licence. thomas.boudier@snv.jussieu.fr Supplementary data are available at Bioinformatics online.
An automatic analyzer of solid state nuclear track detectors using an optic RAM as image sensor
NASA Astrophysics Data System (ADS)
Staderini, Enrico Maria; Castellano, Alfredo
1986-02-01
An optic RAM is a conventional digital random access read/write dynamic memory device featuring a quartz windowed package and memory cells regularly ordered on the chip. Such a device is used as an image sensor because each cell retains data stored in it for a time that depends on the intensity of the light incident on the cell itself. The authors have developed a system which uses an optic RAM to acquire and digitize images from electrochemically etched CR39 solid state nuclear track detectors (SSNTD) at track densities up to 5000 cm^-2. On the digital image so obtained, a microprocessor, with appropriate software, performs image analysis, filtering, track counting and evaluation.
Radiology and Enterprise Medical Imaging Extensions (REMIX).
Erdal, Barbaros S; Prevedello, Luciano M; Qian, Songyue; Demirer, Mutlu; Little, Kevin; Ryu, John; O'Donnell, Thomas; White, Richard D
2018-02-01
Radiology and Enterprise Medical Imaging Extensions (REMIX) is a platform originally designed to support both the medical imaging-driven clinical and clinical-research operational needs of the Department of Radiology of The Ohio State University Wexner Medical Center. REMIX accommodates the storage and handling of "big imaging data," as needed for large multi-disciplinary cancer-focused programs. The evolving REMIX platform contains an array of integrated tools/software packages for the following: (1) server and storage management; (2) image reconstruction; (3) digital pathology; (4) de-identification; (5) business intelligence; (6) texture analysis; and (7) artificial intelligence. These capabilities, along with documentation and guidance explaining how to interact with commercial systems (e.g., PACS, EHR, commercial databases) that currently exist in clinical environments, are to be made freely available.
PIRATE: pediatric imaging response assessment and targeting environment
NASA Astrophysics Data System (ADS)
Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho
2010-02-01
By combining the strengths of various imaging modalities, the multimodality imaging approach has potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regime design, and treatment response assessment in cancer management. To address the urgent needs for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic-contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and validation of potential imaging biomarkers.
PhAst: A Flexible IDL Astronomical Image Viewer
NASA Astrophysics Data System (ADS)
Rehnberg, Morgan; Crawford, R.; Trueblood, M.; Mighell, K.
2012-01-01
We present near-Earth asteroid data analyzed with PhAst, a new IDL astronomical image viewer based on the existing application ATV. PhAst opens, displays, and analyzes an arbitrary number of FITS images. Analysis packages include image calibration, photometry, and astrometry (provided through an interface with SExtractor, SCAMP, and missFITS). PhAst has been designed to generate reports for Minor Planet Center reporting. PhAst is cross platform (Linux/Mac OSX/Windows for image viewing and Linux/Mac OSX for image analysis) and can be downloaded from the following website at NOAO: http://www.noao.edu/staff/mighell/phast/. Rehnberg was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program which is funded by the National Science Foundation Research Experiences for Undergraduates Program and the Department of Defense ASSURE program through Scientific Program Order No. 13 (AST-0754223) of the Cooperative Agreement No. AST-0132798 between the Association of Universities for Research in Astronomy (AURA) and the NSF.
NASA Astrophysics Data System (ADS)
Bianchi, R. M.; Boudreau, J.; Konstantinidis, N.; Martyniuk, A. C.; Moyse, E.; Thomas, J.; Waugh, B. M.; Yallup, D. P.; ATLAS Collaboration
2017-10-01
In their early days, HEP experiments made use of photographic images both to record and store experimental data and to illustrate their findings. As the experiments evolved, they needed to find new ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. Then the case of the ATLAS experiment is considered in more detail and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. Towards the end, the other ATLAS visualization tools are briefly presented as well. Future development plans and improvements in the ATLAS event display packages are also discussed.
PIV/HPIV Film Analysis Software Package
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.
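The core computation, a 2-dimensional spatial autocorrelation of an interrogation subregion, can be expressed compactly with an FFT via the Wiener-Khinchin relation; the Python sketch below is a generic illustration of that step, not the original PC/AT software.

# Sketch of the core computation (2-D spatial autocorrelation of an
# interrogation subregion via FFT); not the original PC/AT software.
import numpy as np

def autocorrelation_2d(subregion):
    """Return the 2-D spatial autocorrelation of an image subregion."""
    sub = subregion - subregion.mean()
    spectrum = np.fft.fft2(sub)
    power = spectrum * np.conj(spectrum)   # Wiener-Khinchin theorem
    corr = np.fft.ifft2(power).real
    return np.fft.fftshift(corr)           # zero-lag peak at the centre

# In double-exposure PIV, the particle displacement appears as symmetric side
# peaks offset from the central self-correlation peak.
subregion = np.random.rand(64, 64)
corr = autocorrelation_2d(subregion)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)   # (32, 32): the zero-lag peak for uncorrelated noise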
VRML and Collaborative Environments: New Tools for Networked Visualization
NASA Astrophysics Data System (ADS)
Crutcher, R. M.; Plante, R. L.; Rajlich, P.
We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.
Macy, Jonathan T.; Chassin, Laurie; Presson, Clark C.; Yeung, Ellen
2015-01-01
Objective: Test the effect of exposure to the U.S. Food and Drug Administration's proposed graphic images with text warning statements for cigarette packages on implicit and explicit attitudes toward smoking. Design and methods: A two-session web-based study was conducted with 2192 young adults 18-25 years old. During session one, demographics, smoking behavior, and baseline implicit and explicit attitudes were assessed. Session two, completed on average 18 days later, contained random assignment to viewing one of three sets of cigarette packages: graphic images with text warnings, text warnings only, or current U.S. Surgeon General's text warnings. Participants then completed post-exposure measures of implicit and explicit attitudes. ANCOVAs tested the effect of condition on the outcomes, controlling for baseline attitudes. Results: Smokers who viewed packages with graphic images plus text warnings demonstrated more negative implicit attitudes compared to smokers in the other conditions (p=.004). For the entire sample, explicit attitudes were more negative for those who viewed graphic images plus text warnings compared to those who viewed current U.S. Surgeon General's text warnings (p=.014), but there was no difference compared to those who viewed text-only warnings. Conclusion: Graphic health warnings on cigarette packages can influence young adult smokers' implicit attitudes toward smoking. PMID:26442992
NASA Astrophysics Data System (ADS)
Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.
2013-08-01
Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
Cengel, Ferhat
2016-01-01
Emergency physicians and radiologists have been increasingly encountering internal concealment of illegal drugs. The packages commonly contain powdered solid drugs such as cocaine, heroin, methamphetamine and hashish, but they may also contain cocaine in the liquid form. The second type of package has recently been more commonly encountered, and poses a greater diagnostic challenge. As clinical evaluation and laboratory tests frequently fail to make the correct diagnosis, imaging examination is typically required. Imaging methods assume a vital role in the diagnosis, follow-up and management. Abdominal X-ray, ultrasonography, CT and MRI are used for the imaging purposes. Among the aforementioned methods, low-dose CT is state-of-the-art in these cases. It is of paramount importance that radiologists have a full knowledge of the imaging characteristics of these packages and accurately guide physicians and security officials. PMID:26867003
An image-processing software package: UU and Fig for optical metrology applications
NASA Astrophysics Data System (ADS)
Chen, Lujie
2013-06-01
Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package: UU (data processing) and Fig (data rendering), which incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
The Cooperative VAS Program with the Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Diak, George R.; Menzel, W. Paul
1988-01-01
Work was divided between the analysis/forecast model development and evaluation of the impact of satellite data in mesoscale numerical weather prediction (NWP), development of the Multispectral Atmospheric Mapping Sensor (MAMS), and other related research. The Cooperative Institute for Meteorological Satellite Studies (CIMSS) Synoptic Scale Model (SSM) has progressed from a relatively basic analysis/forecast system to a package which includes such features as nonlinear vertical mode initialization, comprehensive Planetary Boundary Layer (PBL) physics, and the core of a fully four-dimensional data assimilation package. The MAMS effort has produced a calibrated visible and infrared sensor that produces imagery at high spatial resolution. The MAMS was developed in order to study small-scale atmospheric moisture variability, to monitor and classify clouds, and to investigate the role of surface characteristics in the production of clouds, precipitation, and severe storms.
Three-dimensional reconstruction for coherent diffraction patterns obtained by XFEL.
Nakano, Miki; Miyashita, Osamu; Jonic, Slavica; Song, Changyong; Nam, Daewoong; Joti, Yasumasa; Tama, Florence
2017-07-01
The three-dimensional (3D) structural analysis of single particles using an X-ray free-electron laser (XFEL) is a new structural biology technique that enables observations of molecules that are difficult to crystallize, such as flexible biomolecular complexes and living tissue in the state close to physiological conditions. In order to restore the 3D structure from the diffraction patterns obtained by the XFEL, computational algorithms are necessary as the orientation of the incident beam with respect to the sample needs to be estimated. A program package for XFEL single-particle analysis based on the Xmipp software package, that is commonly used for image processing in 3D cryo-electron microscopy, has been developed. The reconstruction program has been tested using diffraction patterns of an aerosol nanoparticle obtained by tomographic coherent X-ray diffraction microscopy.
Mcclenny, Levi D; Imani, Mahdi; Braga-Neto, Ulisses M
2017-11-25
Gene regulatory networks govern the function of key cellular processes, such as control of the cell cycle, response to stress, DNA repair mechanisms, and more. Boolean networks have been used successfully in modeling gene regulatory networks. In the Boolean network model, the transcriptional state of each gene is represented by 0 (inactive) or 1 (active), and the relationship among genes is represented by logical gates updated at discrete time points. However, the Boolean gene states are never observed directly, but only indirectly and incompletely through noisy measurements based on expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays. The Partially-Observed Boolean Dynamical System (POBDS) signal model is distinct from other deterministic and stochastic Boolean network models in removing the requirement of a directly observable Boolean state vector and allowing uncertainty in the measurement process, addressing the scenario encountered in practice in transcriptomic analysis. BoolFilter is an R package that implements the POBDS model and associated algorithms for state and parameter estimation. It allows the user to estimate the Boolean states, network topology, and measurement parameters from time series of transcriptomic data using exact and approximated (particle) filters, as well as simulate the transcriptomic data for a given Boolean network model. Some of its infrastructure, such as the network interface, is the same as in the previously published R package for Boolean Networks BoolNet, which enhances compatibility and user accessibility to the new package. We introduce the R package BoolFilter for Partially-Observed Boolean Dynamical Systems (POBDS). The BoolFilter package provides a useful toolbox for the bioinformatics community, with state-of-the-art algorithms for simulation of time series transcriptomic data as well as the inverse process of system identification from data obtained with various expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays.
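As a toy illustration of the partially-observed Boolean dynamics that the package models, the Python sketch below simulates a 3-gene Boolean network with Bernoulli process noise and a noisy continuous readout; BoolFilter itself is an R package, and its actual API, network definitions, and filtering algorithms are not reproduced here.

# A toy simulation of partially-observed Boolean dynamics (Python sketch for
# illustration only; BoolFilter is an R package and its API is not shown here).
import numpy as np

rng = np.random.default_rng(0)

def step(state, p_noise=0.05):
    """One update of a 3-gene Boolean network with Bernoulli process noise."""
    g1, g2, g3 = state
    nxt = np.array([g2 & ~g3,        # illustrative regulatory logic
                    g1 | g3,
                    ~g1], dtype=bool)
    flips = rng.random(3) < p_noise  # random gene flips (process noise)
    return nxt ^ flips

def observe(state, sigma=0.4):
    """Noisy continuous measurement of the hidden Boolean state,
    e.g. an expression readout."""
    return state.astype(float) + rng.normal(scale=sigma, size=state.shape)

state = np.array([True, False, True])
for _ in range(5):
    state = step(state)
    print(state.astype(int), observe(state).round(2))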
Giang, Kim Bao; Chung, Le Hong; Minh, Hoang Van; Kien, Vu Duy; Giap, Vu Van; Hinh, Nguyen Duc; Cuong, Nguyen Manh; Manh, Pham Duc; Duc, Ha Anh; Yang, Jui-Chen
2016-01-01
Graphic health warnings (GHW) on tobacco packages have proven to be effective in increasing quit attempts among smokers and reducing initial smoking among adolescents. This research aimed to examine the relative importance of different attributes of graphic health warnings on tobacco packages in Viet Nam. A discrete choice experiment (DCE) design was applied with a conditional logit model. In addition, a ranking method was used to order the GHW labels from the least to the most dreadful. With the results from the DCE model, graphic type was shown to be the most important attribute, followed by cost and coverage area of the GHW. The least important attribute was the position of the GHW. Among 5 graphic types (internal lung cancer image, external damaged teeth, abstract image, human suffering image and text), the image of lung cancer was found to have the strongest influence on both smokers and non-smokers. With the ranking method, the images of throat cancer and heart disease were considered the most dreadful. GHWs should be designed with these attributes in mind, to maximise influence on purchase decisions among both smokers and non-smokers.
Methods for scalar-on-function regression.
Reiss, Philip T; Goldsmith, Jeff; Shang, Han Lin; Ogden, R Todd
2017-08-01
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images, etc. are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorizing the basic model types as linear, nonlinear and nonparametric. We discuss publicly available software packages, and illustrate some of the procedures by application to a functional magnetic resonance imaging dataset.
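For orientation, the linear scalar-on-function model underlying most of the reviewed approaches can be written, in standard notation rather than notation taken from the paper itself, as:

\[
y_i = \alpha + \int_{\mathcal{T}} \beta(t)\, x_i(t)\, dt + \varepsilon_i ,
\]

where \(y_i\) is the scalar response, \(x_i(t)\) the functional predictor observed over the domain \(\mathcal{T}\), \(\beta(t)\) the coefficient function to be estimated, and \(\varepsilon_i\) an error term; the nonlinear and nonparametric variants replace the integral term with more flexible functionals of \(x_i\).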
Tanpitukpongse, T P; Mazurowski, M A; Ikhena, J; Petrella, J R
2017-03-01
Alzheimer disease is a prevalent neurodegenerative disease. Computer assessment of brain atrophy patterns can help predict conversion to Alzheimer disease. Our aim was to assess the prognostic efficacy of individual versus combined regional volumetrics in 2 commercially available brain volumetric software packages for predicting conversion of patients with mild cognitive impairment to Alzheimer disease. Data were obtained through the Alzheimer's Disease Neuroimaging Initiative. One hundred ninety-two subjects (mean age, 74.8 years; 39% female) diagnosed with mild cognitive impairment at baseline were studied. All had T1-weighted MR imaging sequences at baseline and 3-year clinical follow-up. Analysis was performed with NeuroQuant and Neuroreader. Receiver operating characteristic curves assessing the prognostic efficacy of each software package were generated using a univariable approach based on individual regional brain volumes and 2 multivariable approaches (multiple regression and random forest) that combined multiple volumes. On univariable analysis of 11 NeuroQuant and 11 Neuroreader regional volumes, hippocampal volume had the highest area under the curve for both software packages (0.69, NeuroQuant; 0.68, Neuroreader) and was not significantly different (P > .05) between packages. Multivariable analysis did not increase the area under the curve for either package (NeuroQuant: 0.63 with logistic regression, 0.60 with random forest; Neuroreader: 0.65 with logistic regression, 0.62 with random forest). Of the multiple regional volume measures available in FDA-cleared brain volumetric software packages, hippocampal volume remains the best single predictor of conversion of mild cognitive impairment to Alzheimer disease at 3-year follow-up. Combining volumetrics did not add additional prognostic efficacy. Therefore, future prognostic studies in mild cognitive impairment, combining such tools with demographic and other biomarker measures, are justified in using hippocampal volume as the only volumetric biomarker. © 2017 by American Journal of Neuroradiology.
Carlton, Holly D.; Elmer, John W.; Li, Yan; ...
2016-04-13
For this study, synchrotron radiation micro-tomography, a non-destructive three-dimensional imaging technique, was employed to investigate an entire microelectronic package with a cross-sectional area of 16 x 16 mm. Due to the synchrotron's high flux and brightness, the sample was imaged in just 3 minutes at an 8.7 μm spatial resolution.
AE (Acoustic Emission) for Flip-Chip CGA/FCBGA Defect Detection
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2014-01-01
C-mode scanning acoustic microscopy (C-SAM) is a nondestructive inspection technique that uses ultrasound to show the internal features of a specimen. A very high or ultra-high-frequency ultrasound passes through a specimen to produce a visible acoustic microimage (AMI) of its inner features. As ultrasound travels into a specimen, the wave is absorbed, scattered or reflected. The response is highly sensitive to the elastic properties of the materials and is especially sensitive to air gaps. This specific characteristic makes AMI the preferred method for finding "air gaps" such as delamination, cracks, voids, and porosity. C-SAM analysis, which is a type of AMI, was widely used in the past for evaluation of plastic microelectronic circuits, especially for detecting delamination of direct die bonding. With the introduction of flip-chip die attachment in a package, its use has been expanded to nondestructive characterization of the flip-chip solder bumps and underfill. Figure 1.1 compares visual and C-SAM inspection approaches for defect detection, especially for solder joint interconnections and hidden defects. C-SAM is specifically useful for package features like internal cracks and delamination. C-SAM not only allows for the visualization of interior features, it can also produce images on a layer-by-layer basis. Visual inspection, however, is superior to C-SAM only for exposed features, including solder dewetting, microcracks, and contamination. Ideally, a combination of inspection techniques - visual, optical and SEM microscopy, C-SAM, and X-ray - needs to be performed in order to assure quality at the part, package, and system levels. This report presents evaluations performed on various advanced packages/assemblies, especially the flip-chip die version of ball grid array/column grid array (BGA/CGA), using C-SAM equipment. Both external and in-house equipment was used for the evaluation. The outside facility provided images of the key features that could be detected using the most advanced C-SAM equipment with a skilled operator; the investigation then continued using in-house equipment, with its limitations. For comparison, representative X-rays of the assemblies were also gathered to show the key defect-detection features of these non-destructive techniques. The key images gathered and compared are as follows. The 2D X-ray and C-SAM images of a plastic LGA assembly were compared to show features that could be detected by either NDE technique; for this specific case, X-ray was the clear winner. Flip-chip CGA and FCBGA assemblies with and without a heat sink were evaluated by C-SAM; only the FCCGA package that had no heat sink could be fully analyzed for underfill and bump quality, and cross-sectional microscopy did not reveal the peripheral delamination features detected by C-SAM. A number of fine-pitch PBGA assemblies were analyzed by C-SAM; even though the internal features of the package assemblies could be detected, C-SAM was unable to detect solder joint failure at either the package or board level. Twenty touch-ups with a soldering iron at a 700°F tip temperature, each of about 5 seconds' duration, did not induce defects detectable in C-SAM images; other techniques need to be considered to induce known defects for characterization.
Given NASA's emphasis on the use of microelectronic packages and assemblies, and on quality assurance for workmanship defect detection, understanding the key features of the various inspection systems that detect defects at early stages of packaging and assembly is critical to developing approaches that will minimize future failures. Additional specific, tailored non-destructive inspection approaches could enable low-risk insertion of these advanced electronic packages with hidden and fine features.
PANDA: a pipeline toolbox for analyzing brain diffusion images
Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang
2013-01-01
Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named “Pipeline for Analyzing braiN Diffusion imAges” (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies. PMID:23439846
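The diffusion metrics PANDA produces follow the standard tensor-eigenvalue definitions; the short Python reference below computes FA and MD from the three eigenvalues and is provided only as a reminder of those formulas, not as PANDA code (PANDA itself is a MATLAB toolbox built on FSL and related packages).

# Standard definitions of fractional anisotropy (FA) and mean diffusivity (MD)
# from the diffusion tensor eigenvalues; not PANDA's own code.
import numpy as np

def fa_md(eigenvalues):
    """Return FA and MD computed from the three tensor eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

print(fa_md([1.7e-3, 0.3e-3, 0.3e-3]))   # prolate tensor typical of white matter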
NASA Technical Reports Server (NTRS)
1981-01-01
The software developed to simulate the ground control point navigation system is described. The Ground Control Point Simulation Program (GCPSIM) is designed as an analysis tool to predict the performance of the navigation system. The system consists of two star trackers, a global positioning system receiver, a gyro package, and a landmark tracker.
Development of a ROV Deployed Video Analysis Tool for Rapid Measurement of Submerged Oil/Gas Leaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savas, Omer
Expanded deep sea drilling around the globe makes it necessary to have readily available tools to quickly and accurately measure discharge rates from accidental submerged oil/gas leak jets, so that first responders can deploy adequate resources for containment. We have developed and tested a field-deployable video analysis software package which is able to provide, in the field, sufficiently accurate flow rate estimates for initial responders to accidental oil discharges in submarine operations. The essence of our approach is based on tracking coherent features at the interface in the near field of immiscible turbulent jets. The software package, UCB_Plume, is ready to be used by first responders for field implementation. We have tested the tool on submerged water and oil jets which are made visible using fluorescent dyes, and we have been able to estimate the discharge rate to within 20% accuracy. A high-end Windows laptop computer as the operating platform and a USB-connected high-speed, high-resolution monochrome camera as the imaging device are sufficient for acquiring flow images under continuous unidirectional illumination and running the software in the field. Results are obtained in a matter of minutes.
Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B
2016-11-15
A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed thallium-doped caesium iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm(2) scan area from a single shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high density, nuclear material. With recent developments of high-power laser systems, to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned. Copyright © 2016. Published by Elsevier B.V.
Multispectral laser imaging for advanced food analysis
NASA Astrophysics Data System (ADS)
Senni, L.; Burrascano, P.; Ricci, M.
2016-07-01
A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.
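For readers unfamiliar with lock-in detection, the Python sketch below illustrates the conventional digital version at a single pixel: mixing the measured signal with in-phase and quadrature references at the modulation frequency and averaging recovers a weak amplitude buried in noise. The modified Lock-In technique used in the paper is not reproduced, and the sampling rate, modulation frequency, and noise level are invented.

# Conventional digital lock-in detection at one pixel (illustration only; the
# paper's modified Lock-In variant is not reproduced here).
import numpy as np

fs = 10_000.0                          # sampling rate (Hz), invented
f_mod = 200.0                          # modulation frequency (Hz), invented
t = np.arange(0, 0.5, 1 / fs)          # 0.5 s of samples (100 full cycles)
signal = 0.01 * np.sin(2 * np.pi * f_mod * t + 0.3) + np.random.normal(0, 0.05, t.size)

ref_i = np.sin(2 * np.pi * f_mod * t)  # in-phase reference
ref_q = np.cos(2 * np.pi * f_mod * t)  # quadrature reference

# Mixing followed by averaging (a crude low-pass) recovers the weak amplitude.
i_comp = np.mean(signal * ref_i)
q_comp = np.mean(signal * ref_q)
amplitude = 2 * np.hypot(i_comp, q_comp)
print(amplitude)                       # approximately 0.01 despite the noise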
SimVascular: An Open Source Pipeline for Cardiovascular Simulation.
Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C
2017-03-01
Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management and our recent developments have replaced these commercial components to create a fully open source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.
Development and validation of an open source quantification tool for DSC-MRI studies.
Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J
2015-03-01
This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms to allow external developers to implement their own quantification methods easily and without the need of paying for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold-standard was obtained (R(2)>0.8 and values are within 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated using a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wakefield, M A; Germain, D; Durkin, S J
2008-01-01
Background: Cigarette packaging is a key marketing strategy for promoting brand image. Plain packaging has been proposed to limit brand image, but tobacco companies would resist removal of branding design elements. Method: A 3 (brand types) × 4 (degree of plain packaging) between-subject experimental design was used, using an internet online method, to expose 813 adult Australian smokers to one randomly selected cigarette pack, after which respondents completed ratings of the pack. Results: Compared with current cigarette packs with full branding, cigarette packs that displayed progressively fewer branding design elements were perceived increasingly unfavourably in terms of smokers’ appraisals of the packs, the smokers who might smoke such packs, and the inferred experience of smoking a cigarette from these packs. For example, cardboard brown packs with the number of enclosed cigarettes displayed on the front of the pack and featuring only the brand name in small standard font at the bottom of the pack face were rated as significantly less attractive and popular than original branded packs. Smokers of these plain packs were rated as significantly less trendy/stylish, less sociable/outgoing and less mature than smokers of the original pack. Compared with original packs, smokers inferred that cigarettes from these plain packs would be less rich in tobacco, less satisfying and of lower quality tobacco. Conclusion: Plain packaging policies that remove most brand design elements are likely to be most successful in removing cigarette brand image associations. PMID:18827035
Wakefield, M A; Germain, D; Durkin, S J
2008-12-01
Cigarette packaging is a key marketing strategy for promoting brand image. Plain packaging has been proposed to limit brand image, but tobacco companies would resist removal of branding design elements. A 3 (brand types) x 4 (degree of plain packaging) between-subject experimental design was used, using an internet online method, to expose 813 adult Australian smokers to one randomly selected cigarette pack, after which respondents completed ratings of the pack. Compared with current cigarette packs with full branding, cigarette packs that displayed progressively fewer branding design elements were perceived increasingly unfavourably in terms of smokers' appraisals of the packs, the smokers who might smoke such packs, and the inferred experience of smoking a cigarette from these packs. For example, cardboard brown packs with the number of enclosed cigarettes displayed on the front of the pack and featuring only the brand name in small standard font at the bottom of the pack face were rated as significantly less attractive and popular than original branded packs. Smokers of these plain packs were rated as significantly less trendy/stylish, less sociable/outgoing and less mature than smokers of the original pack. Compared with original packs, smokers inferred that cigarettes from these plain packs would be less rich in tobacco, less satisfying and of lower quality tobacco. Plain packaging policies that remove most brand design elements are likely to be most successful in removing cigarette brand image associations.
NASA Astrophysics Data System (ADS)
Ahi, Kiarash; Shahbazmohamadi, Sina; Asadizanjani, Navid
2018-05-01
In this paper, a comprehensive set of techniques for quality control and authentication of packaged integrated circuits (IC) using terahertz (THz) time-domain spectroscopy (TDS) is developed. By material characterization, the presence of unexpected materials in counterfeit components is revealed. Blacktopping layers are detected using THz time-of-flight tomography, and the thickness of hidden layers is measured. Sanded and contaminated components are detected by THz reflection-mode imaging. Differences between the internal structures of counterfeit and authentic components are revealed through the THz transmission imaging developed here. To enable accurate measurement of features by THz transmission imaging, a novel resolution enhancement technique (RET) has been developed. This RET is based on deconvolution of the THz image with the THz point spread function (PSF). The THz PSF is mathematically modeled by incorporating the spectrum of the THz imaging system, the axis of propagation of the beam, and the intensity extinction coefficient of the object into a Gaussian beam distribution. As a result of implementing this RET, the accuracy of measurements on THz images has been improved from 2.4 mm to 0.1 mm, and bond wires as small as 550 μm inside the packaging of the ICs are imaged.
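A generic sketch of PSF-based resolution enhancement by deconvolution is shown below, using a plain 2-D Gaussian beam model and the Richardson-Lucy routine from scikit-image; the paper's actual PSF model additionally folds in the system spectrum, beam propagation axis, and extinction coefficient, none of which are reproduced here.

# Generic PSF deconvolution sketch (not the paper's exact RET); the Gaussian
# width and image are stand-ins for a modeled THz beam and a raw THz image.
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size=31, sigma=4.0):
    """Simple 2-D Gaussian approximation of the THz beam spot."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

blurred = np.random.rand(128, 128)          # stand-in for a raw THz transmission image
psf = gaussian_psf()
sharpened = richardson_lucy(blurred, psf)   # deconvolved (resolution-enhanced) image
print(sharpened.shape)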
Quantification of regional fat volume in rat MRI
NASA Astrophysics Data System (ADS)
Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren
2003-05-01
Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling
NASA Astrophysics Data System (ADS)
Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.
2017-09-01
Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling.
Comi, Troy J; Neumann, Elizabeth K; Do, Thanh D; Sweedler, Jonathan V
2017-09-01
Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
NASA Astrophysics Data System (ADS)
Arellano-Baeza, A. A.; Garcia, R. V.; Trejo-Soto, M.; Molina-Sauceda, E.
Mexico is one of the most volcanically active regions in North America. Volcanic activity in central Mexico is associated with the subduction of the Cocos and Rivera plates beneath the North American plate. Periods of enhanced microseismic activity associated with the volcanic activity of the Colima and Popocatepetl volcanoes are compared to periods of low microseismic activity. We detected changes in the number and orientation of lineaments associated with the microseismic activity through lineament analysis of a temporal sequence of high-resolution satellite images of both volcanoes. 15 m resolution multispectral images provided by the ASTER VNIR instrument were used. The Lineament Extraction and Stripes Statistic Analysis (LESSA) software package was employed for the lineament extraction.
Report for 2011 from the Bordeaux IVS Analysis Center
NASA Technical Reports Server (NTRS)
Charlot, Patrick; Bellanger, Antoine; Bourda, Geraldine; Collioud, Arnaud; Baudry, Alain
2012-01-01
This report summarizes the activities of the Bordeaux IVS Analysis Center during the year 2011. The work focused on (i) regular analysis of the IVS-R1 and IVS-R4 sessions with the GINS software package; (ii) systematic VLBI imaging of the RDV sessions and calculation of the corresponding source structure index and compactness values; (iii) imaging of the sources observed during the 2009 International Year of Astronomy IVS observing session; and (iv) continuation of our VLBI observational program to identify optically-bright radio sources suitable for the link with the future Gaia frame. Also of importance is the enhancement of the IVS LiveWeb site which now comprises all IVS sessions back to 2003, allowing one to search past observations for session-specific information (e.g. sources or stations).
De-Deus, Gustavo; Marins, Juliana; Neves, Aline de Almeida; Reis, Claudia; Fidel, Sandra; Versiani, Marco A; Alves, Haimon; Lopes, Ricardo Tadeu; Paciornik, Sidnei
2014-02-01
The accumulation of debris occurs after root canal preparation procedures specifically in fins, isthmus, irregularities, and ramifications. The aim of this study was to present a step-by-step description of a new method used to longitudinally identify, measure, and 3-dimensionally map the accumulation of hard-tissue debris inside the root canal after biomechanical preparation using free software for image processing and analysis. Three mandibular molars presenting the mesial root with a large isthmus width and a type II Vertucci's canal configuration were selected and scanned. The specimens were assigned to 1 of 3 experimental approaches: (1) 5.25% sodium hypochlorite + 17% EDTA, (2) bidistilled water, and (3) no irrigation. After root canal preparation, high-resolution scans of the teeth were accomplished, and free software packages were used to register and quantify the amount of accumulated hard-tissue debris in either canal space or isthmus areas. Canal preparation without irrigation resulted in 34.6% of its volume filled with hard-tissue debris, whereas the use of bidistilled water or NaOCl followed by EDTA showed a reduction in the percentage volume of debris to 16% and 11.3%, respectively. The closer the distance to the isthmus area, the larger the amount of accumulated debris, regardless of the irrigation protocol used. Through the present method, it was possible to calculate the volume of hard-tissue debris in the isthmuses and in the root canal space. Free-software packages used for image reconstruction, registration, and analysis have been shown to be promising for end-user application. Copyright © 2014. Published by Elsevier Inc.
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g. traffic webcams for phenological studies, or camera networks monitoring ski tracks and avalanches over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software that can obtain and handle images from different networks with different protocols and standards is necessary. Numerous software packages for analyzing images from webcam networks are freely available. These software packages have different strong features not only for analyzing but also for post-processing digital images. But specifically for ease of use, applicability, and scalability, a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images from comprehensive camera networks, but also for creating operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which such a customized approach is followed. FMIPROT currently has the following features: • straightforward installation, • no software dependencies that require extra installation, • communication with multiple camera networks, • automatic downloading and handling of images, • a user-friendly and simple user interface, • data filtering, • visualizing results on customizable plots, • plugins, allowing users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". Color fraction extraction calculates the fractions of the red, green, and blue colors in a region of interest, along with brightness and luminance parameters. The vegetation indices analysis is a collection of indices used in vegetation phenology and includes "Green Fraction" (green chromatic coordinate), "Green-Red Vegetation Index", and "Green Excess Index". A "Snow cover fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is currently being implemented. FMIPROT is being developed during the EU Life+ MONIMET project. Altogether we mounted 28 cameras at 14 different sites in Finland as the MONIMET camera network. In this paper, we will present details of FMIPROT and analysis results from the MONIMET camera network. We will also discuss planned future developments of FMIPROT.
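As a rough illustration of the color fraction extraction described above (not FMIPROT's own code, which is distributed separately), the chromatic coordinates of a region of interest can be computed as follows; the array shapes, the ROI mask, and the synthetic image are invented for the example.

```python
import numpy as np

def color_fractions(rgb, roi):
    """Mean chromatic coordinates inside a region of interest.

    rgb : float array (H, W, 3); roi : boolean mask with the same height/width.
    Returns red, green, and blue chromatic coordinates plus mean brightness.
    """
    r, g, b = (rgb[..., i][roi].astype(float) for i in range(3))
    total = r + g + b + 1e-9                      # avoid division by zero
    rcc, gcc, bcc = (np.mean(c / total) for c in (r, g, b))
    brightness = np.mean(total / 3.0)
    return {"rcc": rcc, "gcc": gcc, "bcc": bcc, "brightness": brightness}

# Illustrative use on a synthetic 100x100 image with a vegetated lower half.
img = np.zeros((100, 100, 3))
img[50:, :, 1] = 0.8                              # green vegetation region
img[:50, :, :] = 0.4                              # grey sky region
mask = np.zeros((100, 100), dtype=bool)
mask[50:, :] = True
print(color_fractions(img, mask))                 # gcc close to 1.0 for the green ROI
```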
A New Image Processing and GIS Package
NASA Technical Reports Server (NTRS)
Rickman, D.; Luvall, J. C.; Cheng, T.
1998-01-01
The image processing and GIS package ELAS was developed during the 1980s by NASA. It proved to be popular, influential, and powerful in the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose, having been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers, and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably a strictly command-line interface. In order to support their research needs, the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale-up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
PROS: An IRAF based system for analysis of x ray data
NASA Technical Reports Server (NTRS)
Conroy, M. A.; Deponte, J.; Moran, J. F.; Orszak, J. S.; Roberts, W. P.; Schmidt, D.
1992-01-01
PROS is an IRAF based software package for the reduction and analysis of x-ray data. The use of a standard, portable, integrated environment provides for both multi-frequency and multi-mission analysis. The analysis of x-ray data differs from optical analysis due to the nature of the x-ray data and its acquisition during constantly varying conditions. The scarcity of data, the low signal-to-noise ratio and the large gaps in exposure time make data screening and masking an important part of the analysis. PROS was developed to support the analysis of data from the ROSAT and Einstein missions but many of the tasks have been used on data from other missions. IRAF/PROS provides a complete end-to-end system for x-ray data analysis: (1) a set of tools for importing and exporting data via FITS format -- in particular, IRAF provides a specialized event-list format, QPOE, that is compatible with its IMAGE (2-D array) format; (2) a powerful set of IRAF system capabilities for both temporal and spatial event filtering; (3) full set of imaging and graphics tasks; (4) specialized packages for scientific analysis such as spatial, spectral and timing analysis -- these consist of both general and mission specific tasks; and (5) complete system support including ftp and magnetic tape releases, electronic and conventional mail hotline support, electronic mail distribution of solutions to frequently asked questions and current known bugs. We will discuss the philosophy, architecture and development environment used by PROS to generate a portable, multimission software environment. PROS is available on all platforms that support IRAF, including Sun/Unix, VAX/VMS, HP, and Decstations. It is available on request at no charge.
NASA Technical Reports Server (NTRS)
Djorgovski, S. G.
1994-01-01
We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complex database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of the SKICAT system, and of some of the scientific results achieved to date. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications and has produced real, published results.
An Uneasy Assemblage: Prisoners, Animals, Asylum-Seeking Children and Posthuman Packaging
ERIC Educational Resources Information Center
Bone, Jane; Blaise, Mindy
2015-01-01
Events in Australia have acted as provocations to thinking about the consequences of becoming a "package" and then being processed. The image of the human, as prisoner, together with narratives about the child and the nonhuman animal as package, are used here in order to understand the world we share with others. These disparate elements…
Event time analysis of longitudinal neuroimage data.
Sabuncu, Mert R; Bernal-Rusiel, Jorge L; Reuter, Martin; Greve, Douglas N; Fischl, Bruce
2014-08-15
This paper presents a method for the statistical analysis of the associations between longitudinal neuroimaging measurements, e.g., of cortical thickness, and the timing of a clinical event of interest, e.g., disease onset. The proposed approach consists of two steps, the first of which employs a linear mixed effects (LME) model to capture temporal variation in serial imaging data. The second step utilizes the extended Cox regression model to examine the relationship between time-dependent imaging measurements and the timing of the event of interest. We demonstrate the proposed method both for the univariate analysis of image-derived biomarkers, e.g., the volume of a structure of interest, and the exploratory mass-univariate analysis of measurements contained in maps, such as cortical thickness and gray matter density. The mass-univariate method employs a recently developed spatial extension of the LME model. We applied our method to analyze structural measurements computed using FreeSurfer, a widely used brain Magnetic Resonance Image (MRI) analysis software package. We provide a quantitative and objective empirical evaluation of the statistical performance of the proposed method on longitudinal data from subjects suffering from Mild Cognitive Impairment (MCI) at baseline. Copyright © 2014 Elsevier Inc. All rights reserved.
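A toy sketch of the two-step idea described above, assuming Python with statsmodels for the linear mixed-effects stage and lifelines for the extended Cox stage; this is not the authors' FreeSurfer-based pipeline, and the synthetic data, column names, and parameter values are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(1)

# Synthetic longitudinal data: 40 subjects with yearly "cortical thickness" measures
# and a clinical event occurring at a random time (censored if never reached).
rows = []
for sid in range(40):
    slope = rng.normal(-0.05, 0.02)
    onset = rng.uniform(2, 6)                 # time of the clinical event
    for t in range(5):
        if t > onset:
            break
        thick = 2.5 + slope * t + rng.normal(0, 0.05)
        rows.append(dict(subject=sid, start=t, stop=t + 1, time=t,
                         thickness=thick, event=int(t + 1 > onset)))
long_df = pd.DataFrame(rows)

# Step 1: linear mixed-effects model for the serial imaging measurements.
lme = smf.mixedlm("thickness ~ time", long_df, groups=long_df["subject"]).fit()
print(lme.params)

# Step 2: extended Cox regression with the imaging measure as a time-dependent covariate.
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df.drop(columns="time"), id_col="subject", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```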
Consumer perceptions on sustainable practices implemented in foodservice organizations in Korea
Ju, Seyoung
2016-01-01
BACKGROUND/OBJECTIVES Sustainable practices in foodservice organizations including commercial and noncommercial ones are critical to ensure the protection of the environment for the future. With the rapid growth of the foodservice industry, wiser usage of input sources such as food, utilities, and single use packaging should be reconsidered for future generations. Therefore, this study aims to investigate the customer's perceptions on sustainable practices and to identify the relationship among sustainable practices, social contribution and purchase intention. SUBJECTS/METHODS The study was conducted using content analyses by reviewing articles on sustainable food service practices published domestically and abroad. Thereafter, data were collected with a face-to-face survey using a questionnaire and analyzed with factor analyses and multiple regressions. RESULTS Sustainable practices classified with factor analysis consisted of 6 dimensions of green food material procurement, sustainable food preparation, green packaging, preservation of energy, waste management, and public relations on green activity, with a total of 25 green activities in foodservice operations. Consumers were not very familiar with the green activities implemented in the foodservice unit, with the lowest awareness of "green food material procurement (2.46 out of 5 points)", and the highest awareness of "green packaging (3.74)" and "waste management (3.28)". The factors influencing the perception of social contribution by foodservice organizations among 6 sustainable practice dimensions were found to be public relations on green activity (β = 0.154), waste management (β = 0.204) and sustainable food preparation (β = 0.183). Green packaging (β = 0.107) and the social contribution of the foodservice organization (β = 0.761) had strong relationships with the image of the organization. The purchase intentions of customers were affected only by the foodservice image (β = 0.775). CONCLUSIONS The results of this study suggest that sustainable practices by foodservice organizations present a good image to customers and increase the awareness of valuable contributions that benefit the customer as well as the community. PMID:26865923
Consumer perceptions on sustainable practices implemented in foodservice organizations in Korea.
Ju, Seyoung; Chang, Hyeja
2016-02-01
Sustainable practices in foodservice organizations including commercial and noncommercial ones are critical to ensure the protection of the environment for the future. With the rapid growth of the foodservice industry, wiser usage of input sources such as food, utilities, and single use packaging should be reconsidered for future generations. Therefore, this study aims to investigate the customer's perceptions on sustainable practices and to identify the relationship among sustainable practices, social contribution and purchase intention. The study was conducted using content analyses by reviewing articles on sustainable food service practices published domestically and abroad. Thereafter, data were collected with a face-to-face survey using a questionnaire and analyzed with factor analyses and multiple regressions. Sustainable practices classified with factor analysis consisted of 6 dimensions of green food material procurement, sustainable food preparation, green packaging, preservation of energy, waste management, and public relations on green activity, with a total of 25 green activities in foodservice operations. Consumers were not very familiar with the green activities implemented in the foodservice unit, with the lowest awareness of "green food material procurement (2.46 out of 5 points)", and the highest awareness of "green packaging (3.74)" and "waste management (3.28)". The factors influencing the perception of social contribution by foodservice organizations among 6 sustainable practice dimensions were found to be public relations on green activity (β = 0.154), waste management (β = 0.204) and sustainable food preparation (β = 0.183). Green packaging (β = 0.107) and the social contribution of the foodservice organization (β = 0.761) had strong relationships with the image of the organization. The purchase intentions of customers were affected only by the foodservice image (β = 0.775). The results of this study suggest that sustainable practices by foodservice organizations present a good image to customers and increase the awareness of valuable contributions that benefit the customer as well as the community.
Method for 3D noncontact measurements of cut trees package area
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Vizilter, Yuri V.
2001-02-01
Progress in imaging sensors and computers creates the background for numerous 3D imaging applications in a wide variety of manufacturing activity. There are many demands for automated precise measurements in the wood industry. One of them is accurate volume definition for cut trees carried on a truck. The key point for volume estimation is determination of the front area of the cut-tree package. To eliminate the slow and inaccurate manual measurements now in practice, an experimental system for automated non-contact wood measurements has been developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers, and original software for image processing and 3D measurements. The proposed method of measurement is based on capturing a stereo pair of the front of the tree package and performing image orthotransformation into the front plane. This technique allows the transformed image to be processed for circle-shape recognition and calculation of their areas. The metric characteristics of the system are provided by a special camera calibration procedure. The paper presents the developed method of 3D measurements, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.
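The abstract does not detail the circle-recognition algorithm; one common way to implement that step on the orthorectified front-plane image is a Hough-transform circle search, sketched below with OpenCV. The synthetic image, detector parameters, and the millimetre-per-pixel scale are placeholders.

```python
import cv2
import numpy as np

# Synthetic stand-in for the orthorectified front-of-package image:
# bright disks (log ends) on a dark background.
img = np.zeros((240, 320), dtype=np.uint8)
for (x, y, r) in [(60, 60, 25), (150, 90, 30), (230, 160, 20)]:
    cv2.circle(img, (x, y), r, 255, -1)
blur = cv2.GaussianBlur(img, (9, 9), 2)

# Detect circular log ends and sum their front areas.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=100, param2=20, minRadius=10, maxRadius=60)
scale_mm_per_px = 5.0        # assumed ground-sample distance after orthorectification
area_mm2 = 0.0
n_logs = 0
if circles is not None:
    n_logs = circles.shape[1]
    for x, y, r in circles[0]:
        area_mm2 += np.pi * (r * scale_mm_per_px) ** 2
print(f"{n_logs} logs detected, front area approx. {area_mm2:.0f} mm^2")
```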
Acceptance testing and commissioning of Kodak Directview CR-850 digital radiography system.
Bezak, E; Nelligan, R A
2006-03-01
This Technical Paper describes Acceptance Testing and Commissioning of the Kodak DirectView CR-850 digital radiography system installed at the Royal Adelaide Hospital. The first of its type installed in Australia, the system is a "dry" image processor, for which no chemicals are required to develop images. Rather, latent radiographic images are stored on photostimulable phosphor screens, which are scanned and displayed by a reader unit. The image can be digitally processed and enhanced before it is forwarded to a storage device, printer or workstation display, thereby alleviating the need to re-expose patients to achieve satisfactory quality images. The phosphor screens are automatically erased, ready for re-use. Results are reported of tests carried out using the optional "Total Quality Tool" quality assurance package installed with the system. This package includes analysis and reporting software which provides for simple testing and reporting of many important characteristics of the system, such as field uniformity, aspect ratio, line and pixel positions, image and system noise, exposure response, scan linearity, modulation transfer function (MTF) and image artefacts. Acceptance Tests were performed for kV and MV exposures. Resolution for MV exposures was at least 0.8 l/mm, and measured phantom dimensions were within 1.05% of expected magnification. Reproducibility between cassettes was within 1.6%. The mean pixel values on the central axis were close to linear for MV exposures from 3 to 10 MU and reached saturation level at around 20 MU for 6 MV and around 30 MU for 23 MV beams. Noise levels were below 0.2%.
Pryor, Alan; Ophus, Colin; Miao, Jianwei
2017-10-25
Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.
Pryor, Alan; Ophus, Colin; Miao, Jianwei
2017-01-01
Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. Here, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, Alan; Ophus, Colin; Miao, Jianwei
Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.
The NJOY Nuclear Data Processing System, Version 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.
The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.
Three-Dimensional Computer Graphics Brain-Mapping Project.
1987-03-15
NEUROQUANT. This package was directed towards quantitative microneuroanatomic data acquisition and analysis. Using this interface, image frames captured... populations of brains. This would have been a prohibitive task if done manually with a densitometer and film, due to user error and bias. NEUROQUANT functioned... of cells were of interest. NEUROQUANT is presently being implemented with a more fully automatic method of localizing the cell bodies directly.
Flightspeed Integral Image Analysis Toolkit
NASA Technical Reports Server (NTRS)
Thompson, David R.
2009-01-01
The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
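FIIAT itself is a C library; the integral-image trick it builds on is easy to show in a short Python sketch: a single cumulative-sum pass lets any rectangular region be summed with four table lookups.

```python
import numpy as np

def integral_image(img):
    """Cumulative 2-D sum with a zero first row/column, so sums are O(1) per query."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixel values inside the rectangle using four table lookups."""
    b, r = top + height, left + width
    return ii[b, r] - ii[top, r] - ii[b, left] + ii[top, left]

img = (np.arange(64, dtype=np.int64).reshape(8, 8)) % 7   # toy 8-bit-style image
ii = integral_image(img)
assert rect_sum(ii, 2, 3, 4, 5) == img[2:6, 3:8].sum()
```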
Hypertext-based computer vision teaching packages
NASA Astrophysics Data System (ADS)
Marshall, A. David
1994-10-01
The World Wide Web Initiative has provided a means for providing hypertext and multimedia based information across the whole INTERNET. Many applications have been developed on such http servers. At Cardiff we have developed a http hypertext based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information ranging from the provision of teaching modules, on- line documentation, timetables for departmental activities to more light hearted hobby interests. One important and novel development to the server has been the development of courseware facilities. This ranges from the provision of on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefitted notably Computer Vision, and Image Processing but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper will address the issues of the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext based system and also will relate practical experiences of using the packages in a class environment. The paper addresses issues of how best to provide information in such a hypertext based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper will also detail many future developments we see possible. One of the key points raised in the paper is that Mosaic's hypertext language (html) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.
An IDL-based analysis package for COBE and other skycube-formatted astronomical data
NASA Technical Reports Server (NTRS)
Ewing, J. A.; Isaacman, Richard B.; Gales, J. M.
1992-01-01
UIMAGE is a data analysis package written in IDL for the Cosmic Background Explorer (COBE) project. COBE has extraordinarily stringent accuracy requirements: 1 percent mid-infrared absolute photometry, 0.01 percent submillimeter absolute spectrometry, and 0.0001 percent submillimeter relative photometry. Thus, many of the transformations and image enhancements common to analysis of large data sets must be done with special care. UIMAGE is unusual in this sense in that it performs as many of its operations as possible on the data in its native format and projection, which in the case of COBE is the quadrilateralized spherical cube ('skycube'). That is, after reprojecting the data, e.g., onto an Aitoff map, the user who performs an operation such as taking a crosscut or extracting data from a pixel is transparently acting upon the skycube data from which the projection was made, thereby preserving the accuracy of the result. Current plans call for formatting external data bases such as CO maps into the skycube format with a high-accuracy transformation, thereby allowing Guest Investigators to use UIMAGE for direct comparison of the COBE maps with those at other wavelengths from other instruments. It is completely menu-driven so that its use requires no knowledge of IDL. Its functionality includes I/O from the COBE archives, FITS files, and IDL save sets as well as standard analysis operations such as smoothing, reprojection, zooming, statistics of areas, spectral analysis, etc. One of UIMAGE's more advanced and attractive features is its terminal independence. Most of the operations (e.g., menu-item selection or pixel selection) that are driven by the mouse on an X-windows terminal are also available using arrow keys and keyboard entry (e.g., pixel coordinates) on VT200 and Tektronix-class terminals. Even limited grey scales of images are available this way. Obviously, image processing is very limited on this type of terminal, but it is nonetheless surprising how much analysis can be done on that medium. Such flexibility has the virtue of expanding the user community to those who must work remotely on non-image terminals, e.g., via modem.
Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P
2014-01-01
Computer-vision-based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open source software. Source code for the data analysis is written in R. The equations used to calculate the image descriptors are also provided.
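A hedged Python sketch of the general pipeline (segmentation mask, region descriptors, principal components analysis); the paper's own descriptors and R-based statistics differ, and the synthetic elliptical masks, chosen features, and component count here are purely illustrative.

```python
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

def synthetic_rosette(a, b):
    """Binary mask of an elliptical blob as a stand-in for a segmented rosette."""
    mask = np.zeros((200, 200), dtype=bool)
    rr, cc = ellipse(100, 100, a, b, shape=mask.shape)
    mask[rr, cc] = True
    return mask

# Extract a handful of shape descriptors per "plant".
features = []
for _ in range(30):
    mask = synthetic_rosette(20 + rng.integers(0, 40), 20 + rng.integers(0, 40))
    p = regionprops(label(mask))[0]
    features.append([p.area, p.perimeter, p.eccentricity, p.solidity,
                     p.major_axis_length / p.minor_axis_length])

# Standardize and reduce to principal components of shape variation.
X = StandardScaler().fit_transform(np.array(features))
pca = PCA(n_components=5).fit(X)
print(pca.explained_variance_ratio_)   # proportion of variation captured by each PC
```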
The connectome mapper: an open-source processing pipeline to map connectomes with MRI.
Daducci, Alessandro; Gerhard, Stephan; Griffa, Alessandra; Lemkaddem, Alia; Cammoun, Leila; Gigandet, Xavier; Meuli, Reto; Hagmann, Patric; Thiran, Jean-Philippe
2012-01-01
Researchers working in the field of global connectivity analysis using diffusion magnetic resonance imaging (MRI) can count on a wide selection of software packages for processing their data, with methods ranging from the reconstruction of the local intra-voxel axonal structure to the estimation of the trajectories of the underlying fibre tracts. However, each package is generally task-specific and uses its own conventions and file formats. In this article we present the Connectome Mapper, a software pipeline aimed at helping researchers through the tedious process of organising, processing and analysing diffusion MRI data to perform global brain connectivity analyses. Our pipeline is written in Python and is freely available as open-source at www.cmtk.org.
A Freeware Path to Neutron Computed Tomography
NASA Astrophysics Data System (ADS)
Schillinger, Burkhard; Craft, Aaron E.
Neutron computed tomography has become a routine method at many neutron sources due to the availability of digital detection systems, powerful computers and advanced software. The commercial packages Octopus by Inside Matters and VGStudio by Volume Graphics have been established as a quasi-standard for high-end computed tomography. However, these packages require a stiff investment and are available to the users only on-site at the imaging facility to do their data processing. There is a demand from users to have image processing software at home to do further data processing; in addition, neutron computed tomography is now being introduced even at smaller and older reactors. Operators need to show a first working tomography setup before they can obtain a budget to build an advanced tomography system. Several packages are available on the web for free; however, these have been developed for X-rays or synchrotron radiation and are not immediately useable for neutron computed tomography. Three reconstruction packages and three 3D-viewers have been identified and used even for Gigabyte datasets. This paper is not a scientific publication in the classic sense, but is intended as a review to provide searchable help to make the described packages usable for the tomography community. It presents the necessary additional preprocessing in ImageJ, some workarounds for bugs in the software, and undocumented or badly documented parameters that need to be adapted for neutron computed tomography. The result is a slightly complicated, but surprisingly high-quality path to neutron computed tomography images in 3D, but not a replacement for the even more powerful commercial software mentioned above.
BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences
Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola
2015-01-01
Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099
BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.
Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola
2015-01-01
Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.
Array Technology for Terahertz Imaging
NASA Technical Reports Server (NTRS)
Reck, Theodore; Siles, Jose; Jung, Cecile; Gill, John; Lee, Choonsup; Chattopadhyay, Goutam; Mehdi, Imran; Cooper, Ken
2012-01-01
Heterodyne terahertz (0.3 - 3THz) imaging systems are currently limited to single or a low number of pixels. Drastic improvements in imaging sensitivity and speed can be achieved by replacing single pixel systems with an array of detectors. This paper presents an array topology that is being developed at the Jet Propulsion Laboratory based on the micromachining of silicon. This technique fabricates the array's package and waveguide components by plasma etching of silicon, resulting in devices with precision surpassing that of current metal machining techniques. Using silicon increases the versatility of the packaging, enabling a variety of orientations of circuitry within the device which increases circuit density and design options. The design of a two-pixel transceiver utilizing a stacked architecture is presented that achieves a pixel spacing of 10mm. By only allowing coupling from the top and bottom of the package the design can readily be arrayed in two dimensions with a spacing of 10mm x 18mm.
Defect Inspection of Flip Chip Solder Bumps Using an Ultrasonic Transducer
Su, Lei; Shi, Tielin; Xu, Zhensong; Lu, Xiangning; Liao, Guanglan
2013-01-01
Surface mount technology has spurred a rapid decrease in the size of electronic packages, and solder bump inspection of surface mount packages is crucial in the electronics manufacturing industry. In this study we demonstrate the feasibility of using a 230 MHz ultrasonic transducer for nondestructive flip chip testing. The reflected time-domain signal was captured while the transducer scanned the flip chip, and an image of the flip chip was generated by scanning acoustic microscopy. Normalized cross-correlation was used to locate the centers of the solder bumps for segmenting the flip chip image. Then five features were extracted from the signals and images. A support vector machine was adopted to process the five features for classification and recognition. The results show the feasibility of this approach with a high recognition rate, proving that defect inspection of flip chip solder bumps using the ultrasonic transducer has high potential in microelectronics packaging.
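The bump-localization step can be sketched with normalized cross-correlation template matching; this is only an illustration of the idea, using OpenCV on a synthetic C-scan-like image, with the template window, threshold, and grid layout invented.

```python
import cv2
import numpy as np

# Synthetic acoustic C-scan image: a 3x3 grid of bright solder-bump echoes.
img = np.zeros((150, 150), dtype=np.float32)
for cy in (30, 75, 120):
    for cx in (30, 75, 120):
        cv2.circle(img, (cx, cy), 10, 1.0, -1)
img = cv2.GaussianBlur(img, (7, 7), 2)

# Template: one idealized bump echo cut from the image.
template = img[15:46, 15:46].copy()

# Normalized cross-correlation map; high values cluster around bump centres,
# which would then be used to crop per-bump tiles for feature extraction.
ncc = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
peaks = np.argwhere(ncc > 0.9)
centres = [(x + template.shape[1] // 2, y + template.shape[0] // 2) for y, x in peaks]
print(f"{len(centres)} candidate bump locations (clustered; a peak-picking step would follow)")
```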
WE-FG-202-12: Investigation of Longitudinal Salivary Gland DCE-MRI Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ger, R; Howell, R; Li, H
Purpose: To determine the correlation between dose and changes through treatment in dynamic contrast enhanced (DCE) MRI voxel parameters (Ktrans, kep, Ve, and Vp) within salivary glands of head and neck oropharyngeal squamous cell carcinoma (HNSCC) patients. Methods: 17 HNSCC patients treated with definitive radiation therapy completed DCE-MRI scans on a 3T scanner at pre-treatment, mid-treatment, and post-treatment time points. Mid-treatment and post-treatment DCE images were deformably registered to pre-treatment DCE images (Velocity software package). Pharmacokinetic analysis of the DCE images used a modified Tofts model to produce parameter maps with an arterial input function selected from each patient’s perivertebral space on the image (NordicICE software package). In-house software was developed for voxel-by-voxel longitudinal analysis of the salivary glands within the registered images. The planning CT was rigidly registered to the pre-treatment DCE image to obtain dose values in each voxel. Voxels within the lower and upper dose quartiles for each gland were averaged for each patient, then an average of the patients’ means for the two quartiles were compared. Dose-relationships were also assessed by Spearman correlations between dose and voxel parameter changes for each patient’s gland. Results: Changes in parameters’ means between time points were observed, but inter-patient variability was high. Ve of the parotid was the only parameter that had a consistently significant longitudinal difference between dose quartiles. The highest Spearman correlation was Vp of the sublingual gland for the change in the pre-treatment to mid-treatment values with only a ρ=0.29. Conclusion: In this preliminary study, there was large inter-patient variability in the changes of DCE voxel parameters with no clear relationship with dose. Additional patients may reduce the uncertainties and allow for the determination of the existence of parameter and dose relationships.
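The pharmacokinetic step fits a modified (extended) Tofts model in NordicICE, whose implementation is proprietary; a simplified numerical sketch of that model and a per-voxel curve fit is shown below, with an invented arterial input function and parameter values.

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts_concentration(t, cp, ktrans, kep, vp):
    """Extended Tofts model: Ct(t) = vp*Cp(t) + Ktrans * integral of Cp(u)*exp(-kep*(t-u)) du."""
    dt = t[1] - t[0]
    conv = np.array([np.sum(cp[: i + 1] * np.exp(-kep * (t[i] - t[: i + 1]))) * dt
                     for i in range(len(t))])
    return vp * cp + ktrans * conv

# Toy arterial input function (biexponential washout after a fast bolus), in minutes.
t = np.linspace(0, 5, 200)
cp = 5.0 * (np.exp(-1.5 * t) - np.exp(-8.0 * t))
ct = tofts_concentration(t, cp, ktrans=0.25, kep=0.8, vp=0.03)

# Per-voxel fit of Ktrans, kep, and vp (Ve follows as Ktrans/kep).
noisy = ct + np.random.default_rng(0).normal(0, 0.01, t.size)
popt, _ = curve_fit(lambda tt, k, ke, v: tofts_concentration(tt, cp, k, ke, v),
                    t, noisy, p0=[0.1, 0.5, 0.05], bounds=(0, [2, 5, 0.5]))
print(dict(zip(["Ktrans", "kep", "vp"], popt.round(3))))
```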
Automated processing of webcam images for phenological classification.
Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H; Schunk, Christian; Kauermann, Göran
2017-01-01
Along with the global climate change, there is an increasing interest in its effect on phenological patterns such as start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images every day of the same natural motif, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows to determine dates of phenological change points, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams only. In particular, this forbids to apply the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software package R and publicly available in the R package phenofun. Executable example code is provided as supplementary material.
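A compact sketch of the percentage-greenness time series and the correlation-based (semi-supervised) pixel selection described above; the published implementation is the R package phenofun, so this Python version, its correlation threshold, and the synthetic image stack are only illustrative.

```python
import numpy as np

def percentage_greenness(stack):
    """Per-pixel green chromatic coordinate for a (time, H, W, 3) image stack."""
    rgb = stack.astype(float)
    return rgb[..., 1] / (rgb.sum(axis=-1) + 1e-9)

def semi_supervised_roi(gcc, prototype_yx, corr_threshold=0.8):
    """Select pixels whose greenness time series correlates with a prototype pixel."""
    t, h, w = gcc.shape
    series = gcc.reshape(t, -1)
    proto = gcc[:, prototype_yx[0], prototype_yx[1]]
    proto = (proto - proto.mean()) / (proto.std() + 1e-9)
    z = (series - series.mean(axis=0)) / (series.std(axis=0) + 1e-9)
    corr = (z * proto[:, None]).mean(axis=0)      # Pearson correlation per pixel
    return (corr >= corr_threshold).reshape(h, w)

# Toy stack: 60 daily frames, 40x40 px; the left half "greens up" over the season.
t = np.linspace(0, 1, 60)[:, None, None]
stack = np.full((60, 40, 40, 3), 0.3)
stack[:, :, :20, 1] += 0.4 * t                    # seasonal green-up signal
roi = semi_supervised_roi(percentage_greenness(stack), prototype_yx=(10, 5))
print(roi[:, :20].mean(), roi[:, 20:].mean())     # ~1.0 inside, ~0.0 outside
```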
Automated processing of webcam images for phenological classification
Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H.; Schunk, Christian; Kauermann, Göran
2017-01-01
Along with the global climate change, there is an increasing interest in its effect on phenological patterns such as start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images every day of the same natural motif, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows to determine dates of phenological change points, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams only. In particular, this forbids to apply the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels’ time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software package R and publicly available in the R package phenofun. Executable example code is provided as supplementary material. PMID:28235092
Quantification of root gravitropic response using a constant stimulus feedback system.
Wolverton, Chris
2015-01-01
Numerous software packages now exist for quantifying root growth responses, most of which analyze a time resolved sequence of images ex post facto. However, few allow for the real-time analysis of growth responses. The system in routine use in our lab allows for real-time growth analysis and couples this to positional feedback to control the stimulus experienced by the responding root. This combination allows us to overcome one of the confounding variables in studies of root gravity response. Seedlings are grown on standard petri plates attached to a vertical rotating stage and imaged using infrared illumination. The angle of a particular region of the root is determined by image analysis, compared to the prescribed angle, and any corrections in positioning are made by controlling a stepper motor. The system allows for the long-term stimulation of a root at a constant angle and yields insights into the gravity perception and transduction machinery not possible with other approaches.
NASA Astrophysics Data System (ADS)
Witzel, Gunther; Lu, Jessica R.; Ghez, Andrea M.; Martinez, Gregory D.; Fitzgerald, Michael P.; Britton, Matthew; Sitarski, Breann N.; Do, Tuan; Campbell, Randall D.; Service, Maxwell; Matthews, Keith; Morris, Mark R.; Becklin, E. E.; Wizinowich, Peter L.; Ragland, Sam; Doppmann, Greg; Neyman, Chris; Lyke, James; Kassis, Marc; Rizzi, Luca; Lilley, Scott; Rampy, Rachel
2016-07-01
General relativity can be tested in the strong gravity regime by monitoring stars orbiting the supermassive black hole at the Galactic Center with adaptive optics. However, the limiting source of uncertainty is the spatial PSF variability due to atmospheric anisoplanatism and instrumental aberrations. The Galactic Center Group at UCLA has completed a project developing algorithms to predict PSF variability for Keck AO images. We have created a new software package (AIROPA), based on modified versions of StarFinder and Arroyo, that takes atmospheric turbulence profiles, instrumental aberration maps, and images as inputs and delivers improved photometry and astrometry on crowded fields. This software package will be made publicly available soon.
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Grey, Melissa
2017-08-01
Quantitative morphometric analyses of form are widely used in palaeontology, especially for taxonomic and evolutionary research. These analyses can involve several measurements performed on hundreds or even thousands of samples. Performing measurements of size and shape on large assemblages of macro- or microfossil samples is generally infeasible or impossible with traditional instruments such as vernier calipers. Instead, digital image processing software is required to perform measurements via suitable digital images of samples. Many software packages exist for morphometric analyses but there is not much available for the integral stage of data collection, particularly for the measurement of the outlines of samples. Some software exists to automatically detect the outline of a fossil sample from a digital image. However, automatic outline detection methods may perform inadequately when samples have incomplete outlines or images contain poor contrast between the sample and staging background. Hence, a manual digitization approach may be the only option. We are not aware of any software packages that are designed specifically for efficient digital measurement of fossil assemblages with numerous samples, especially for the purposes of manual outline analysis. Throughout several previous studies, we have developed a new software tool, JMorph, that is custom-built for that task. JMorph provides the means to perform many different types of measurements, which we describe in this manuscript. We focus on JMorph's ability to rapidly and accurately digitize the outlines of fossils. JMorph is freely available from the authors.
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Bassari, Jinous; Triantafyllopoulos, Spiros
1984-01-01
The University of Southwestern Louisiana (USL) NASA PC R and D statistical analysis support package is designed to be a three-level package to allow statistical analysis for a variety of applications within the USL Data Base Management System (DBMS) contract work. The design addresses usage of the statistical facilities as a library package, as an interactive statistical analysis system, and as a batch processing package.
NASA Astrophysics Data System (ADS)
Ahi, Kiarash; Anwar, Mehdi
2016-04-01
This paper introduces a novel reconstruction approach for enhancing the resolution of terahertz (THz) images. For this purpose, the THz imaging equation is derived; to the best of our knowledge, this is the first report of a THz imaging equation. This imaging equation is universal for THz far-field imaging systems and can be used for analyzing, describing and modeling such systems. The geometry and behavior of Gaussian beams in the far-field region imply that the FWHM of THz beams diverges as the frequency of the beams decreases; thus, the resolution of the measurement decreases at lower frequencies. On the other hand, the depth of penetration of THz beams decreases as frequency increases. Roughly speaking, beams below 1.5 THz are transmitted into integrated circuit (IC) packages and similar packaged objects; thus, it is not possible to use THz pulses with higher frequencies to achieve higher-resolution inspection of packaged items. In this paper, after developing the 3-D THz point spread function (PSF) of the scanning THz beam and then the THz imaging equation, THz images are enhanced through deconvolution of the THz PSF from the THz images. As a result, the resolution has been improved several times beyond the physical limitations of the THz measurement setup in the far-field region, and sub-Nyquist images have been achieved. In particular, MSE and SSIM have been increased by 27% and 50%, respectively. Details as small as 0.2 mm were made visible in THz images that originally revealed no details smaller than 2.2 mm; in other words, the resolution of the images has been increased by a factor of 10. The accuracy of the reconstructed images was confirmed by high-resolution X-ray images.
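The deconvolution step can be pictured with a generic frequency-domain (Wiener-style) restoration; the Gaussian PSF model and the regularization constant below are assumptions for illustration and do not reproduce the authors' derived 3-D PSF or imaging equation:

import numpy as np

def gaussian_psf(shape, fwhm_px):
    """Isotropic Gaussian stand-in for the THz beam profile (assumption)."""
    sigma = fwhm_px / 2.355
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(image, psf, k=1e-2):
    """Restore an image blurred by a known PSF with regularized inverse filtering."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)         # Wiener/Tikhonov-regularized inverse
    return np.real(np.fft.ifft2(F))

# Example (hypothetical data): restored = wiener_deconvolve(thz_image, gaussian_psf(thz_image.shape, 9.0))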
Muir, Dylan R; Kampa, Björn M
2014-01-01
Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.
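As an illustration of the peri-stimulus averaging such a toolbox provides, here is a hedged Python sketch (FocusStack itself is a Matlab toolbox; the window lengths and the trace and onset frames in the usage example are assumptions):

import numpy as np

def psth(dff, onset_frames, pre=10, post=30):
    """Average a single cell's dF/F trace around stimulus onsets (illustrative sketch)."""
    windows = [dff[t - pre:t + post] for t in onset_frames
               if t - pre >= 0 and t + post <= len(dff)]
    windows = np.array(windows)                        # (n_trials, pre + post)
    return windows.mean(axis=0), windows.std(axis=0) / np.sqrt(len(windows))

# Example: mean_trace, sem_trace = psth(dff_trace, onset_frames=[120, 360, 600], pre=15, post=45)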
On-line 3-dimensional confocal imaging in vivo.
Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M
2000-09-01
In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, performing both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, on-line, interactive features should help to improve the clinical utility of this technology.
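The field-separation trick can be sketched as follows (illustrative only; the field order and an even number of scan lines are assumptions, and this is not the system's actual code):

import numpy as np

def split_fields(frames):
    """Split interlaced frames (n, H, W) into a field sequence (2n, H/2, W).

    Even and odd scan lines are exposed at different times, so treating them as
    separate images doubles the temporal (and hence axial CMTF) sampling rate.
    Assumes an even number of scan lines H.
    """
    even = frames[:, 0::2, :]
    odd = frames[:, 1::2, :]
    fields = np.empty((2 * frames.shape[0],) + even.shape[1:], dtype=frames.dtype)
    fields[0::2] = even        # even field first (assumed field order)
    fields[1::2] = odd
    return fields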
Design study of the accessible focal plane telescope for shuttle
NASA Technical Reports Server (NTRS)
1976-01-01
The design and cost analysis of an accessible focal plane telescope for Spacelab is presented in blueprints, tables, and graphs. Topics covered include the telescope tube, the telescope mounting, the airlock plus Spacelab module aft plate, the instrument adapter, and the instrument package. The system allows access to the image plane with instrumentation that can be operated by a scientist in a shirt sleeve environment inside a Spacelab module.
46th Annual Gun and Missile Systems Conference and Exhibition. Volume 2. Wednesday
2011-09-01
[Fragmentary proceedings excerpt] Recoverable topics include designing for operational challenges in gun-hardened munitions (multiple charges, angular acceleration variation), impulsive pressure loads, and manufacturing/producibility, with contributions from the Manufacture and Producibility Branch of the US Army Armament Research, Development and Engineering Center and from Alan Sweet and William Goldberg of the Packaging Division.
Scheffels, Janne; Lund, Ingeborg
2017-01-01
Objectives Snus use has increased among youth in Norway in recent years and is now more prevalent than smoking. Concurrently, a range of new products and package designs have been introduced to the market. The aim of this study was to explore how youth perceive snus branding and package design, and the role, if any, of snus packaging on perceptions of appeal and harm of snus among youth. Participants Adolescent tobacco users and non-users (N=35) ages 15–17 years. Design We conducted interviews among 6 focus groups (each with 4–7 participants). Participants were shown snus packages with a variety of designs and with different product qualities (flavour additives, slim, regular, white and brown sachets) and group discussions focused on how they perceived packages and products. The focus group discussions were semistructured using a standard guide, and analysed thematically. Results The participants in the focus groups narrated distinct images of snus brands and associated user identities. Package design elements such as shapes, colours, images and fonts were described as guiding these perceptions. Packaging elements and flavour additives were associated with perceptions of product harm. The appeal of flavoured snus products and new types of snus sachets seemed to blend in with these processes, reinforcing positive attitudes and contributing to the creation of particular identities for products and their users. Conclusions The findings indicate that packaging is vital to consumer's identification with, and differentiation between, snus brands. In view of this, snus branding and packaging can be seen as fulfilling a similar promotional role as advertising messages: generating preferences and appeal. PMID:28373248
Scheffels, Janne; Lund, Ingeborg
2017-04-03
Snus use has increased among youth in Norway in recent years and is now more prevalent than smoking. Concurrently, a range of new products and package designs have been introduced to the market. The aim of this study was to explore how youth perceive snus branding and package design, and the role, if any, of snus packaging on perceptions of appeal and harm of snus among youth. Adolescent tobacco users and non-users (N=35) ages 15-17 years. We conducted interviews among 6 focus groups (each with 4-7 participants). Participants were shown snus packages with a variety of designs and with different product qualities (flavour additives, slim, regular, white and brown sachets) and group discussions focused on how they perceived packages and products. The focus group discussions were semistructured using a standard guide, and analysed thematically. The participants in the focus groups narrated distinct images of snus brands and associated user identities. Package design elements such as shapes, colours, images and fonts were described as guiding these perceptions. Packaging elements and flavour additives were associated with perceptions of product harm. The appeal of flavoured snus products and new types of snus sachets seemed to blend in with these processes, reinforcing positive attitudes and contributing to the creation of particular identities for products and their users. The findings indicate that packaging is vital to consumer's identification with, and differentiation between, snus brands. In view of this, snus branding and packaging can be seen as fulfilling a similar promotional role as advertising messages: generating preferences and appeal. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
A qualitative study of children's snack food packaging perceptions and preferences.
Letona, Paola; Chacon, Violeta; Roberto, Christina; Barnoya, Joaquin
2014-12-15
Food marketing is pervasive in high- and low/middle-income countries and is recognized as a significant risk factor for childhood obesity. Although food packaging is one of the most important marketing tools to persuade consumers at the point-of-sale, scant research has examined how it influences children's perceptions. This study was conducted in Guatemala and aimed to understand which snack foods are the most frequently purchased by children and how aspects of food packaging influence their product perceptions. Six activity-based focus groups were conducted in two elementary public schools with thirty-seven children (Grades 1 through 6, age range 7-12 years old). During each focus group, children participated in three activities: 1) list their most frequently purchased food products; 2) select the picture of their favorite product, the packaging they liked best, and the product they thought was the healthiest from eight choices; and 3) draw the package of a new snack. Children reported purchasing salty snacks most frequently. Most children chose their favorite product based on taste perceptions, which can be influenced by food packaging. Visual elements influenced children's selection of favorite packaging (i.e., characters, colors) and healthiest product (i.e., images), and persuaded some children to incorrectly think certain foods contained healthy ingredients. When children generated their own drawings of a new product, the most frequently included packaging elements in the drawings were product name, price, product image and characters, suggesting those aspects of the food packaging were most significant to them. Policies regulating package content and design are required to discourage consumption of unhealthy snacks. This might be another public health strategy that can aid to halt the obesity epidemic.
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
Young adults' interpretations of tobacco brands: implications for tobacco control.
Gendall, Philip; Hoek, Janet; Thomson, George; Edwards, Richard; Pene, Gina; Gifford, Heather; Pirikahu, Gill; McCool, Judith
2011-10-01
Marketers have long recognized the power and importance of branding, which creates aspirational attributes that increase products' attractiveness. Although brand imagery has traditionally been communicated via mass media, packaging's importance in promoting desirable brand-attribute associations has increased. Knowledge of how groups prone to smoking experimentation interpret tobacco branding would inform the debate over plain packaging currently occurring in many countries. We conducted 12 group discussions and four in-depth interviews with 66 young adult smokers and nonsmokers of varying ethnicities from two larger New Zealand cities and one provincial city. Participants evaluated 10 familiar and unfamiliar tobacco brands using brand personality attributes and discussed the associations they had made. Participants ascribed very different images to different brands when exposed to the packaging alone, regardless of whether they had seen or heard of the brands before. Perceptual mapping of brands and image attributes highlighted how brand positions varied from older, more traditional, and male oriented to younger, feminine, and "cool." Our findings emphasize the continuing importance of tobacco branding as a promotion tool, even when communicated only by packaging. The ease with which packaging alone enabled young people to identify brand attributes and the desirable associations these connoted illustrate how tobacco packaging functions as advertising. The results support measures such as plain packaging of tobacco products to reduce exposure to these overt behavioral cues.
NASA Astrophysics Data System (ADS)
Iltis, G.; Caswell, T. A.; Dill, E.; Wilkins, S.; Lee, W. K.
2014-12-01
X-ray tomographic imaging of porous media has proven to be a valuable tool for investigating and characterizing the physical structure and state of both natural and synthetic porous materials, including glass bead packs, ceramics, soil and rock. Given that most synchrotron facilities have user programs which grant academic researchers access to facilities and x-ray imaging equipment free of charge, a key limitation or hindrance for small research groups interested in conducting x-ray imaging experiments is the financial cost associated with post-experiment data analysis. While the cost of high performance computing hardware continues to decrease, expenses associated with licensing commercial software packages for quantitative image analysis continue to increase, with current prices being as high as $24,000 USD, for a single user license. As construction of the Nation's newest synchrotron accelerator nears completion, a significant effort is being made here at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory (BNL), to provide an open-source, experiment-to-publication toolbox that reduces the financial and technical 'activation energy' required for performing sophisticated quantitative analysis of multidimensional porous media data sets, collected using cutting-edge x-ray imaging techniques. Implementation focuses on leveraging existing open-source projects and developing additional tools for quantitative analysis. We will present an overview of the software suite that is in development here at BNL including major design decisions, a demonstration of several test cases illustrating currently available quantitative tools for analysis and characterization of multidimensional porous media image data sets and plans for their future development.
Baldasso, Rosane Pérez; Tinoco, Rachel Lima Ribeiro; Vieira, Cristina Saft Matos; Fernandes, Mário Marques; Oliveira, Rogério Nogueira
2016-10-01
The process of forensic facial analysis may be founded on several scientific techniques and imaging modalities, such as digital signal processing, photogrammetry and craniofacial anthropometry. However, one of the main limitations in this analysis is the comparison of images acquired with different angles of incidence. The present study aimed to explore a potential approach for the correction of the planar perspective projection (PPP) in geometric structures traced from the human face. A technique for the correction of the PPP was calibrated within photographs of two geometric structures obtained with angles of incidence distorted by 80°, 60° and 45°. The technique was performed using ImageJ® 1.46r (National Institutes of Health, Bethesda, Maryland). The corrected images were compared with photographs of the same object obtained at 90° (reference). In a second step, the technique was validated on a digital human face created using the MakeHuman® 1.0.2 (Free Software Foundation, Massachusetts, USA) and Blender® 2.75 (Blender® Foundation, Amsterdam, the Netherlands) software packages. The images registered with angular distortion presented a gradual decrease in height when compared to the reference. The digital technique for the correction of the PPP is a valuable tool for forensic applications using photographic imaging modalities, such as forensic facial analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
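A hedged sketch of this kind of planar perspective correction, using a homography estimated from four reference points (the point coordinates and file names below are illustrative placeholders, and OpenCV is used here rather than the ImageJ workflow described):

import cv2
import numpy as np

# Four image points of a planar reference structure photographed at an oblique angle,
# and the coordinates those points should have in a frontal (90 degree) view.
src = np.float32([[112, 80], [508, 95], [530, 410], [95, 398]])      # placeholder values
dst = np.float32([[100, 100], [500, 100], [500, 400], [100, 400]])   # placeholder values

img = cv2.imread("oblique_view.jpg")                  # hypothetical input photograph
H = cv2.getPerspectiveTransform(src, dst)             # 3x3 planar homography
corrected = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
cv2.imwrite("corrected_frontal_view.jpg", corrected)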
Analysis of XMM-Newton Data from Extended Sources and the Diffuse X-Ray Background
NASA Technical Reports Server (NTRS)
Snowden, Steven
2011-01-01
Reduction of X-ray data from extended objects and the diffuse background is a complicated process that requires attention to the details of the instrumental response as well as an understanding of the multiple background components. We present methods and software that we have developed to reduce data from XMM-Newton EPIC imaging observations for both the MOS and PN instruments. The software has now been included in the Science Analysis System (SAS) package available through the XMM-Newton Science Operations Center (SOC).
New space sensor and mesoscale data analysis
NASA Technical Reports Server (NTRS)
Hickey, John S.
1987-01-01
The developed Earth Science and Application Division (ESAD) system/software provides the research scientist with the following capabilities: an extensive data base management capability to convert various experiment data types into a standard format; an interactive analysis and display package (AVE80); an interactive imaging/color graphics capability utilizing the Apple III and IBM PC workstations integrated into the ESAD computer system; and local and remote smart-terminal capability which provides color video, graphics, and LaserJet output. Recommendations for updating and enhancing the performance of the ESAD computer system are listed.
SolarSoft Desat Package for the Recovery of Saturated AIA Flare Images
NASA Astrophysics Data System (ADS)
Schwartz, Richard Alan; Torre, Gabriele; Piana, Michele; Massone, AnnaMaria
2015-04-01
The dynamic range of EUV images has been limited by the problem of CCD saturation, as seen countless times in movies of solar flares made using the Solar Dynamics Observatory's Atmospheric Imaging Assembly (SDO AIA). Concurrent with the saturation are the eight rays emanating from the saturation locus, which are the result of diffraction off the wire meshes that support the EUV passband filters. This, in a nutshell, is the problem and its solution. By utilizing techniques similar to those used for making images from the rotating modulation collimators on the Ramaty High Energy Solar Spectroscopic Imager (RHESSI), we have developed a software package that can be used to make images of the EUV flare kernels in a highly automated way, as described in Schwartz et al. (2014). Starting from cutouts centered on a flaring region, the software uses the point spread function (PSF) of the diffraction pattern to identify and reconstruct the region of primary saturation. The software also uses the best information available to reconstruct the general scene obscured by overflow saturation and subtracts away the diffraction fringes. It is not a total correction for the PSF, but is meant above all to recover the flare images. The software is freely available and distributed within the DESAT package of Solar Software (Schwartz, R. A., Torre, G., & Piana, M. 2014, Astrophysical Journal Letters, 793, L23).
Research and Development of Fully Automatic Alien Smoke Stack and Packaging System
NASA Astrophysics Data System (ADS)
Yang, Xudong; Ge, Qingkuan; Peng, Tao; Zuo, Ping; Dong, Weifu
2017-12-01
To address the low efficiency of manual sorting and packaging at current tobacco distribution centers, a safe, efficient and fully automatic stacking and packaging system for irregularly shaped ("alien") cigarette cartons was developed. The system's functions are implemented using PLC control technology, servo control technology, robot technology, image recognition technology and human-computer interaction technology. The characteristics, principles, control process and key technologies of the system are discussed in detail. After installation and commissioning, the fully automatic alien smoke stacking and packaging system performed well and met the requirements for handling shaped cigarettes.
Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T
2016-06-01
Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
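The grid-search idea can be illustrated with a short sketch; find_spots and index_and_integrate below are hypothetical stand-ins for the underlying processing calls, and the parameter ranges are assumptions rather than IOTA's defaults:

import itertools

SPOT_AREA_MIN = [3, 5, 7, 9]           # candidate minimum spot areas, in pixels (assumption)
SIGMA_STRONG = [2.0, 3.0, 4.0, 5.0]    # candidate strong-spot sigma thresholds (assumption)

def grid_search(image):
    """Pick the spot-finding parameters that maximize integrated reflections (sketch)."""
    best = None
    for area, sigma in itertools.product(SPOT_AREA_MIN, SIGMA_STRONG):
        spots = find_spots(image, min_area=area, sigma_strong=sigma)    # hypothetical call
        result = index_and_integrate(image, spots)                      # hypothetical call
        if result is None:                      # indexing failed for this parameter pair
            continue
        score = result.n_integrated_reflections
        if best is None or score > best[0]:
            best = (score, area, sigma, result)
    return best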
Xu, Yihua; Pitot, Henry C
2006-03-01
In studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest must be separated from the other components based on differences in color and density. Background artifacts in the captured sample image, such as uneven illumination or color shading, can severely compromise these measurements. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven illumination can be corrected. With Pixel_Separator, objects of different colors can be separated from one another, as in immunohistochemically stained slides. The resulting images of objects separated from the other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
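A flat-field style correction conveys the idea behind removing uneven illumination (a generic sketch, not the BK_Correction algorithm; the blur width is an assumption):

import numpy as np
from scipy.ndimage import gaussian_filter

def correct_uneven_illumination(image, sigma=100):
    """Remove low-frequency shading from a grayscale micrograph (illustrative sketch)."""
    img = image.astype(float)
    background = gaussian_filter(img, sigma=sigma)     # smooth estimate of the illumination field
    flat = img / np.maximum(background, 1e-6)          # divide out the shading
    return flat * img.mean()                           # rescale to the original intensity range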
Along-Track Reef Imaging System (ATRIS)
Brock, John; Zawada, Dave
2006-01-01
"Along-Track Reef Imaging System (ATRIS)" describes the U.S. Geological Survey's Along-Track Reef Imaging System, a boat-based sensor package for rapidly mapping shallow water benthic environments. ATRIS acquires high resolution, color digital images that are accurately geo-located in real-time.
Turner, Clare E; Russell, Bruce R; Gant, Nicholas
2015-11-01
Magnetic resonance spectroscopy (MRS) is an analytical procedure that can be used to non-invasively measure the concentration of a range of neural metabolites. Creatine is an important neurometabolite, with dietary supplementation offering therapeutic potential for neurological disorders with dysfunctional energetic processes. Neural creatine concentrations can be probed using proton MRS and quantified using a range of software packages based on different analytical methods. This experiment examines the differences in quantification performance of two commonly used analysis packages following a creatine supplementation strategy with potential therapeutic application. Human participants followed a seven-day dietary supplementation regime in a placebo-controlled, cross-over design interspersed with a five-week wash-out period. Spectroscopy data were acquired the day immediately following supplementation and analyzed with two commonly used software packages that employ vastly different quantification methods. Results demonstrate that neural creatine concentration was augmented following creatine supplementation when analyzed using the peak fitting method of quantification (105.9%±10.1). In contrast, no change in neural creatine levels was detected with supplementation when analysis was conducted using the basis spectrum method of quantification (102.6%±8.6). Results suggest that software packages that employ the peak fitting procedure for spectral quantification are possibly more sensitive to subtle changes in neural creatine concentrations. The relative simplicity of the spectroscopy sequence and the data analysis procedure suggests that peak fitting procedures may be the most effective means of metabolite quantification when detection of subtle alterations in neural metabolites is necessary. The straightforward technique can be used on a clinical magnetic resonance imaging system. Copyright © 2015 Elsevier Inc. All rights reserved.
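The peak-fitting style of quantification can be pictured with a simple least-squares fit of a single resonance; the Lorentzian lineshape, the 2.8-3.2 ppm window, and the starting values are assumptions, and this is not the code of either vendor package:

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(ppm, amplitude, center, width, baseline):
    return amplitude * width ** 2 / ((ppm - center) ** 2 + width ** 2) + baseline

def fit_creatine_peak(ppm, spectrum):
    """Fit the ~3.0 ppm creatine resonance and return its integrated area (sketch)."""
    window = (ppm > 2.8) & (ppm < 3.2)                 # assumed fitting window
    x, y = ppm[window], spectrum[window]
    p0 = [y.max() - y.min(), 3.02, 0.03, y.min()]      # rough initial guesses
    (amplitude, center, width, baseline), _ = curve_fit(lorentzian, x, y, p0=p0)
    return np.pi * amplitude * width                   # analytic area under the fitted Lorentzian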
Diagnostic evaluation of three cardiac software packages using a consecutive group of patients
2011-01-01
Purpose The aim of this study was to compare the diagnostic performance of the three software packages 4DMSPECT (4DM), Emory Cardiac Toolbox (ECTb), and Cedars Quantitative Perfusion SPECT (QPS) for quantification of myocardial perfusion scintigrams (MPS) using a large group of consecutive patients. Methods We studied 1,052 consecutive patients who underwent 2-day stress/rest 99mTc-sestamibi MPS studies. The reference/gold-standard classifications for the MPS studies were obtained from three physicians, each with more than 25 years of experience in nuclear cardiology, who re-evaluated all MPS images. Automatic processing was carried out using the 4DM, ECTb, and QPS software packages. Total stress defect extent (TDE) and summed stress score (SSS) based on a 17-segment model were obtained from the software packages. Receiver-operating characteristic (ROC) analysis was performed. Results A total of 734 patients were classified as normal and the remaining 318 were classified as having infarction and/or ischemia. The performance of the software packages, calculated as the area under the SSS ROC curve, was 0.87 for 4DM, 0.80 for QPS, and 0.76 for ECTb (QPS vs. ECTb p = 0.03; other differences p < 0.0001). The areas under the TDE ROC curve were 0.87 for 4DM, 0.82 for QPS, and 0.76 for ECTb (QPS vs. ECTb p = 0.0005; other differences p < 0.0001). Conclusion There are considerable differences in performance between the three software packages, with 4DM showing the best performance and ECTb the worst. These differences in performance should be taken into consideration when software packages are used in clinical routine or in clinical studies. PMID:22214226
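The ROC comparison reported here can be reproduced in outline with scikit-learn; the arrays below are random placeholders standing in for the expert labels and one package's summed stress scores:

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1052)                # placeholder: 1 = infarction/ischemia, 0 = normal
sss_4dm = rng.random(1052) + 0.3 * y_true        # placeholder summed stress scores for one package

auc = roc_auc_score(y_true, sss_4dm)             # area under the SSS ROC curve
fpr, tpr, thresholds = roc_curve(y_true, sss_4dm)
print(f"area under the SSS ROC curve: {auc:.2f}")

Statistically comparing correlated AUCs between packages additionally requires a paired test such as DeLong's method, which is not part of scikit-learn.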
Canetta, Elisabetta; Adya, Ashok K
2011-07-15
Pressure sensitive adhesives (PSAs), such as those used in packaging and adhesive tapes, are very often encountered in forensic investigations. In criminal activities, packaging tapes may be used for sealing packets containing drugs, explosive devices, or questioned documents, while adhesive and electrical tapes are used occasionally in kidnapping cases. In this work, the potential of using atomic force microscopy (AFM) in both imaging and force mapping (FM) modes to derive additional analytical information from PSAs is demonstrated. AFM has been used to illustrate differences in the ultrastructural and nanomechanical properties of three visually distinguishable commercial PSAs to first test the feasibility of using this technique. Subsequently, AFM was used to detect nanoscopic differences between three visually indistinguishable PSAs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Mladinich, C.
2010-01-01
Human disturbance is a leading ecosystem stressor. Human-induced modifications include transportation networks, areal disturbances due to resource extraction, and recreation activities. High-resolution imagery and object-oriented classification rather than pixel-based techniques have successfully identified roads, buildings, and other anthropogenic features. Three commercial, automated feature-extraction software packages (Visual Learning Systems' Feature Analyst, ENVI Feature Extraction, and Definiens Developer) were evaluated by comparing their ability to effectively detect the disturbed surface patterns from motorized vehicle traffic. Each package achieved overall accuracies in the 70% range, demonstrating the potential to map the surface patterns. The Definiens classification was more consistent and statistically valid. Copyright © 2010 by Bellwether Publishing, Ltd. All rights reserved.
ImagingReso: A Tool for Neutron Resonance Imaging
Zhang, Yuxuan; Bilheux, Jean -Christophe
2017-11-01
ImagingReso is an open-source Python library that simulates the neutron resonance signal for neutron imaging measurements. By defining sample information such as density, thickness in the neutron path, and isotopic ratios of the elemental composition of the material, the package plots the expected resonance peaks for a selected neutron energy range. Various sample types, such as layers of single elements (Ag, Co, etc. in solid form), chemical compounds (UO3, Gd2O3, etc.), or even multiple layers of both types, can be plotted with this package. Major plotting features include display of the transmission/attenuation in wavelength, energy, and time scale, and the ability to show or hide elemental and isotopic contributions in the total resonance signal.
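For orientation, the underlying attenuation calculation for a single layer looks like the following; this is a generic Beer-Lambert style sketch using an assumed tabulated cross-section, not the ImagingReso API:

import numpy as np

AVOGADRO = 6.02214076e23   # atoms per mole

def transmission(sigma_barn, density_g_cm3, molar_mass_g_mol, thickness_cm):
    """Neutron transmission through one layer: T(E) = exp(-N * sigma(E) * d).

    sigma_barn is the energy-dependent total cross-section taken from an evaluated
    nuclear data table (assumed input); N is the atom number density of the layer.
    """
    n_per_cm3 = density_g_cm3 * AVOGADRO / molar_mass_g_mol
    sigma_cm2 = np.asarray(sigma_barn) * 1e-24          # 1 barn = 1e-24 cm^2
    return np.exp(-n_per_cm3 * sigma_cm2 * thickness_cm)

# Example (hypothetical cross-section table): attenuation = 1 - transmission(silver_sigma, 10.49, 107.87, 0.05)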
Automated site characterization for robotic sample acquisition systems
NASA Astrophysics Data System (ADS)
Scholl, Marija S.; Eberlein, Susan J.
1993-04-01
A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.
The image of mathematics held by Irish post-primary students
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-08-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for 'image of mathematics' was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research focused on students studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. Students were aged between 15 and 18 years. A questionnaire was constructed with both quantitative and qualitative aspects. The questionnaire survey was completed by 356 post-primary students. Responses were analysed quantitatively using Statistical Package for the Social Sciences (SPSS) and qualitatively using the constant comparative method of analysis and by reviewing individual responses. Findings provide an insight into Irish post-primary students' images of mathematics and offer a means for constructing a theoretical model of image of mathematics which could be beneficial for future research.
NASA Technical Reports Server (NTRS)
1994-01-01
Acceptance data package - engineering drawings and associated lists for fabrication, assembly and maintenance (cleaning, fluidized bed coating, bonding and staking) of the motor/encoder solar x-ray imager (SXI) (Aeroflex p/n 16187) were given.
NASA Astrophysics Data System (ADS)
Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.
2013-10-01
3D computed tomography (CT) image segmentation is already a well-established tool in medical research and in routine daily clinical practice. However, such techniques have not yet been applied to 3D CT image segmentation for baggage and package security screening. CT systems are increasingly used in airports for security baggage examination. In this contribution we investigate the use of current 3D CT medical image segmentation methods in this new domain. Experimental results of 3D segmentation on real CT baggage security imagery using a range of techniques are presented and discussed.
Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease
Shamonin, Denis P.; Bron, Esther E.; Lelieveldt, Boudewijn P. F.; Smits, Marion; Klein, Stefan; Staring, Marius
2013-01-01
Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4–5x on an 8-core machine. Using OpenCL a speedup factor of 2 was realized for computation of the Gaussian pyramids, and 15–60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88 and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license. PMID:24474917
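The classification step can be sketched with scikit-learn; the random arrays below are placeholders for the registered gray-matter features and diagnostic labels, and the pipeline is an illustration rather than the study's exact setup:

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(299, 500))        # placeholder: one row per participant, gray-matter features
y = rng.integers(0, 2, 299)            # placeholder: 1 = Alzheimer's disease, 0 = cognitively normal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
auc_scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"cross-validated AUC: {auc_scores.mean():.2f}")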
TCIApathfinder: an R client for The Cancer Imaging Archive REST API.
Russell, Pamela; Fountain, Kelly; Wolverton, Dulcy; Ghosh, Debashis
2018-06-05
The Cancer Imaging Archive (TCIA) hosts publicly available de-identified medical images of cancer from over 25 body sites and over 30,000 patients. Over 400 published studies have utilized freely available TCIA images. Images and metadata are available for download through a web interface or a REST API. Here we present TCIApathfinder, an R client for the TCIA REST API. TCIApathfinder wraps API access in user-friendly R functions that can be called interactively within an R session or easily incorporated into scripts. Functions are provided to explore the contents of the large database and to download image files. TCIApathfinder provides easy access to TCIA resources in the highly popular R programming environment. TCIApathfinder is freely available under the MIT license as a package on CRAN (https://cran.r-project.org/web/packages/TCIApathfinder/index.html) and at https://github.com/pamelarussell/TCIApathfinder. Copyright ©2018, American Association for Cancer Research.
Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research
NASA Astrophysics Data System (ADS)
Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.
2010-12-01
Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize scanning, image reconstruction/segmentation and pore-space analysis capabilities of a new generation benchtop MCT scanner and associated software package. To demonstrate applicability to soil research the project was focused on determination of porosities and pore size distributions of two Brazilian Oxisols from segmented MCT-data. Effects of metal filters and various acquisition parameters (e.g. total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
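Once a volume is segmented, porosity and a simple pore-size summary follow directly; the Otsu threshold and spherical-equivalent diameters below are generic choices for illustration, not the scanner software's algorithms:

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def porosity_and_pore_sizes(volume, voxel_size_um):
    """Summarize the pore space of a grayscale micro-CT volume (illustrative sketch)."""
    pores = volume < threshold_otsu(volume)               # low attenuation = pore space (assumption)
    porosity = pores.mean()                               # pore-voxel fraction
    labels, n_pores = ndimage.label(pores)                # connected pore regions
    voxels_per_pore = np.bincount(labels.ravel())[1:]     # drop the background bin
    volumes_um3 = voxels_per_pore * voxel_size_um ** 3
    eq_diameters_um = (6 * volumes_um3 / np.pi) ** (1 / 3)    # spherical-equivalent diameters
    return porosity, eq_diameters_um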
Web-Based Mapping Puts the World at Your Fingertips
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's award-winning Earth Resources Laboratory Applications Software (ELAS) package was developed at Stennis Space Center. Since 1978, ELAS has been used worldwide for processing satellite and airborne sensor imagery data of the Earth's surface into readable and usable information. DATASTAR Inc., of Picayune, Mississippi, has used ELAS software in the DATASTAR Image Processing Exploitation (DIPEx) desktop and Internet image processing, analysis, and manipulation software. The new DIPEx Version III includes significant upgrades and improvements compared to its esteemed predecessor. A true World Wide Web application, this product evolved with worldwide geospatial dimensionality and numerous other improvements that seamlessly support the World Wide Web version.
Ma_Miss Experiment: miniaturized imaging spectrometer for subsurface studies
NASA Astrophysics Data System (ADS)
Coradini, A.; Ammannito, E.; Boccaccini, A.; de Sanctis, M. C.; di Iorio, T.; Battistelli, E.; Capanni, A.
2011-10-01
The study of the Martian subsurface will provide important constraints on the nature, timing and duration of alteration and sedimentation processes on Mars, as well as on the complex interactions between the surface and the atmosphere. A Drilling system, coupled with an in situ analysis package, is installed on the Exomars-Pasteur Rover to perform in situ investigations up to 2m in the Mars soil. Ma_Miss (Mars Multispectral Imager for Subsurface Studies) is a spectrometer devoted to observe the lateral wall of the borehole generated by the Drilling system. The instrument is fully integrated with the Drill and shares its structure and electronics.
Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event
NASA Technical Reports Server (NTRS)
Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.
2008-01-01
NASA utilized Image Intensified Video Cameras for ATV data acquisition from a jet flying at 12.8 km. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems Trackeye. Astrometric results were limited by saturation, plate scale, and imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behave differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). Use of high resolution digital video cameras in future should remedy this shortcoming.
Kinematics Simulation Analysis of Packaging Robot with Joint Clearance
NASA Astrophysics Data System (ADS)
Zhang, Y. W.; Meng, W. J.; Wang, L. Q.; Cui, G. H.
2018-03-01
Considering the influence of joint clearance on motion error, repeated positioning accuracy and the overall positioning of the machine, this paper presents a kinematics simulation analysis of a packaging robot, a 2-degree-of-freedom (DOF) planar parallel robot, motivated by the high precision and speed required of packaging equipment. The motion constraint equation of the mechanism is established, and the motion error is analyzed and simulated for the case of clearance in the revolute joints. The simulation results show that the size of the joint clearance affects the movement accuracy and packaging efficiency of the robot. The analysis provides a reference for packaging equipment design and selection criteria and is of significance for packaging industry automation.
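A toy Monte Carlo sketch conveys how revolute clearance propagates to end-effector error; the two-link serial geometry, link lengths, and clearance value below are assumptions for illustration and do not reproduce the paper's constraint equations for the parallel mechanism:

import numpy as np

L1, L2 = 0.30, 0.25        # link lengths in metres (assumption)
CLEARANCE = 0.5e-3         # radial joint clearance in metres (assumption)

def end_effector(theta1, theta2, rng):
    """Planar forward kinematics with a random clearance offset at each revolute joint."""
    def clearance_offset():
        r = CLEARANCE * np.sqrt(rng.uniform())            # uniform over the clearance circle
        phi = rng.uniform(0.0, 2.0 * np.pi)
        return np.array([r * np.cos(phi), r * np.sin(phi)])
    elbow = np.array([L1 * np.cos(theta1), L1 * np.sin(theta1)]) + clearance_offset()
    tip = elbow + np.array([L2 * np.cos(theta1 + theta2), L2 * np.sin(theta1 + theta2)])
    return tip + clearance_offset()

rng = np.random.default_rng(1)
samples = np.array([end_effector(np.pi / 4, np.pi / 3, rng) for _ in range(10000)])
errors_mm = np.linalg.norm(samples - samples.mean(axis=0), axis=1) * 1000.0
print(f"95th percentile positioning error: {np.percentile(errors_mm, 95):.3f} mm")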
Watermarking spot colors in packaging
NASA Astrophysics Data System (ADS)
Reed, Alastair; Filler, Tomáš; Falkenstern, Kristyn; Bai, Yang
2015-03-01
In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve the check-out efficiency and customer experience for retailers. Digimarc Barcode is a machine readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track the checkout efficiency since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors, therefore spot colors needs to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.
NASA Technical Reports Server (NTRS)
Rochon, Gilbert L.
1989-01-01
A user requirements analysis (URA) was undertaken to determine an appropriate public domain Geographic Information System (GIS) software package for potential integration with NASA's LAS (Land Analysis System) 5.0 image processing system. The necessity for a public domain system was underscored due to the perceived need for source code access and flexibility in tailoring the GIS system to the needs of a heterogeneous group of end-users, and to specific constraints imposed by LAS and its user interface, Transportable Applications Executive (TAE). Subsequently, a review was conducted of a variety of public domain GIS candidates, including GRASS 3.0, MOSS, IEMIS, and two university-based packages, IDRISI and KBGIS. The review method was a modified version of the GIS evaluation process developed by the Federal Interagency Coordinating Committee on Digital Cartography. One IEMIS-derivative product, the ALBE (AirLand Battlefield Environment) GIS, emerged as the most promising candidate for integration with LAS. IEMIS (Integrated Emergency Management Information System) was developed by the Federal Emergency Management Agency (FEMA). ALBE GIS is currently under development at the Pacific Northwest Laboratory under contract with the U.S. Army Corps of Engineers' Engineering Topographic Laboratory (ETL). Accordingly, recommendations are offered with respect to a potential LAS/ALBE GIS linkage and with respect to further system enhancements, including coordination with the development of the Spatial Analysis and Modeling System (SAMS) GIS within the IDM (Intelligent Data Management) developments at Goddard's National Space Science Data Center.
scarlet: Source separation in multi-band images by Constrained Matrix Factorization
NASA Astrophysics Data System (ADS)
Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert
2018-03-01
SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
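A greatly simplified sketch of the core factorization (this uses plain nonnegative matrix factorization from scikit-learn with only a nonnegativity constraint, whereas scarlet adds PSF matching and morphology priors such as monotonicity and symmetry):

import numpy as np
from sklearn.decomposition import NMF

def deblend(cube, n_sources):
    """Separate a (bands, H, W) image cube into per-source SEDs and morphologies (sketch)."""
    n_bands, h, w = cube.shape
    X = np.clip(cube.reshape(n_bands, h * w), 0, None)          # NMF requires nonnegative data
    model = NMF(n_components=n_sources, init="random", max_iter=1000, random_state=0)
    seds = model.fit_transform(X)                                # (bands, n_sources): per-source spectra
    morphologies = model.components_.reshape(n_sources, h, w)    # nonnegative source shapes
    return seds, morphologies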
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this project. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discreetly. For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
Quantitative evaluation of software packages for single-molecule localization microscopy.
Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael
2015-08-01
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
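Two of the criteria mentioned (detection rate and localization accuracy) can be computed with a simple nearest-neighbour matching of detections to ground-truth emitters; the tolerance radius is an assumption, and no one-to-one pairing is enforced in this sketch, unlike a full evaluation protocol:

import numpy as np
from scipy.spatial import cKDTree

def detection_rate_and_rmse(truth_xy, found_xy, tolerance_nm=250.0):
    """Match ground-truth emitter positions (N, 2) to detected localizations (M, 2)."""
    tree = cKDTree(found_xy)
    dists, _ = tree.query(truth_xy, distance_upper_bound=tolerance_nm)
    matched = np.isfinite(dists)                           # unmatched emitters get infinite distance
    detection_rate = matched.mean()                        # fraction of emitters recovered
    rmse = np.sqrt(np.mean(dists[matched] ** 2))           # accuracy of the matched localizations
    return detection_rate, rmse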
Heller, Rebecca; Martin-Biggers, Jennifer; Berhaupt-Glickstein, Amanda; Quick, Virginia; Byrd-Bredbenner, Carol
2015-10-01
Objective: To determine whether food label information and advertisements for foods containing no fruit cause children to have a false impression of the foods' fruit content. Design: In the food label condition, a trained researcher showed each child sixteen different photographs of front-of-package food labels that varied with regard to fruit content (i.e. real fruit v. sham fruit) and label elements. In the food advertisement condition, children viewed sixteen 30 s television food advertisements with similar fruit content and label elements as in the food label condition. After viewing each food label and advertisement, children responded to the question 'Did they use fruit to make this?' with responses of yes, no or don't know. Setting: Schools, day-care centres, after-school programmes and other community groups. Subjects: Children aged 4-7 years. Results: In the food label condition, χ2 analysis of differences within fruit-content variations indicated children (n 58; mean age 4·2 years) were significantly more accurate in identifying real fruit foods as the label's informational load increased and were least accurate when neither a fruit name nor an image was on the label. Children (n 49; mean age 5·4 years) in the food advertisement condition were more likely to identify real fruit foods when advertisements had fruit images compared with when no image was included, while fruit images in advertisements for sham fruit foods significantly reduced accuracy of responses. Conclusions: Findings suggest that labels and advertisements for sham fruit foods mislead children with regard to the food's real fruit content.
NASA Astrophysics Data System (ADS)
Forsberg, Fredrik; Roxhed, Niclas; Fischer, Andreas C.; Samel, Björn; Ericsson, Per; Hoivik, Nils; Lapadatu, Adriana; Bring, Martin; Kittilsland, Gjermund; Stemme, Göran; Niklaus, Frank
2013-09-01
Imaging in the long wavelength infrared (LWIR) range from 8 to 14 μm is an extremely useful tool for non-contact measurement and imaging of temperature in many industrial, automotive and security applications. However, the cost of the infrared (IR) imaging components has to be significantly reduced to make IR imaging a viable technology for many cost-sensitive applications. This paper demonstrates new and improved fabrication and packaging technologies for next-generation IR imaging detectors based on uncooled IR bolometer focal plane arrays. The proposed technologies include very large scale heterogeneous integration for combining high-performance SiGe quantum-well bolometers with electronic integrated read-out circuits, and CMOS-compatible wafer-level vacuum packaging. The fabrication and characterization of bolometers with a pitch of 25 μm × 25 μm that are arranged on read-out wafers in arrays with 320 × 240 pixels are presented. The bolometers contain a multi-layer quantum well SiGe thermistor with a temperature coefficient of resistance of -3.0%/K. The proposed CMOS-compatible wafer-level vacuum packaging technology uses Cu-Sn solid-liquid interdiffusion (SLID) bonding. The presented technologies are suitable for implementation in cost-efficient fabless business models with the potential to bring about the cost reduction needed to enable low-cost IR imaging products for industrial, security and automotive applications.
NASA Astrophysics Data System (ADS)
Gampe, David; Huber García, Verena; Marzahn, Philip; Ludwig, Ralf
2017-04-01
Actual evaporation (ETa) is an essential variable for assessing water availability, drought risk and food security, among others. Measurements of ETa are, however, limited to a small footprint, hampering spatially explicit analysis and application, and are very often not available at all. To overcome this data scarcity, ETa can be assessed by various remote sensing approaches such as the Triangle Method (Jiang & Islam, 1999), in which ETa is estimated using the Normalized Difference Vegetation Index (NDVI) and land surface temperature (LST). In this study, the R package 'TriangleMethod' was compiled to efficiently calculate NDVI, process LST, and finally derive ETa from the applied data set. The package contains all necessary calculation steps and allows easy processing of a large database of remote sensing images. By default, the parameterizations for the Landsat TM and ETM+ sensors are implemented; however, the algorithms can easily be extended to additional sensors. The auxiliary variables required to estimate ETa with this method, such as elevation, solar radiation and air temperature at the overpass time, can be processed as gridded information to allow for a better representation of the study area. The package was successfully applied in various studies in Spain, Palestine, Costa Rica and Canada.
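For readers unfamiliar with the approach, the following minimal Python sketch illustrates the core of a triangle-method workflow: compute NDVI from red and near-infrared bands, then interpolate an evaporative fraction between per-NDVI-bin wet and dry edges of the LST distribution. It is a simplified stand-in, not the 'TriangleMethod' R package itself, and the bin-wise edge estimation is an assumption made for illustration.

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from near-infrared and red bands."""
        return (nir - red) / (nir + red + 1e-9)

    def evaporative_fraction(lst, ndvi_img, n_bins=20):
        """Bin pixels by NDVI and linearly interpolate LST between the observed
        wet (minimum LST) and dry (maximum LST) edges within each bin."""
        ef = np.full(lst.shape, np.nan)
        edges = np.linspace(ndvi_img.min(), ndvi_img.max(), n_bins)
        bins = np.digitize(ndvi_img, edges)
        for b in np.unique(bins):
            sel = bins == b
            lst_min, lst_max = lst[sel].min(), lst[sel].max()
            if lst_max > lst_min:
                ef[sel] = (lst_max - lst[sel]) / (lst_max - lst_min)
        return np.clip(ef, 0.0, 1.0)

    # ETa would then follow as ef * (Rn - G), the available energy at overpass time.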
Shen, Chaobo; Hai, Zhou; Zhao, Cong; Zhang, Jiawei; Evans, John L.; Bozack, Michael J.; Suhling, Jeffrey C.
2017-01-01
This study presents test results and comparative literature data on the influence of isothermal aging and thermal cycling on Sn-1.0Ag-0.5Cu (SAC105) and Sn-3.0Ag-0.5Cu (SAC305) ball grid array (BGA) solder joints finished with ENIG and ENEPIG on the board side and ENIG on the package side, compared with ImAg plating on both sides. The resulting degradation data suggest that the main concern for 0.4 mm pitch, 10 mm package size BGAs is the package-side surface finish, not the board side. That is, ENIG performs better than immersion Ag for applications involving long-term isothermal aging. SAC305, with a higher relative fraction of Ag3Sn IMC within the solder, performs better than SAC105. SEM and polarized light microscope analysis show that cracks propagated from the corners to the center or even into the solder bulk, eventually causing fatigue failure. Three factors are discussed: IMC, grain structure, and Ag3Sn particles. The continuous growth of Cu-Sn intermetallic compounds (IMC) and grains increases the risk of failure, while Ag3Sn particles seem helpful in blocking crack propagation. PMID:28772811
GenomeDiagram: a python package for the visualization of large-scale genomic data.
Pritchard, Leighton; White, Jennifer A; Birch, Paul R J; Toth, Ian K
2006-03-01
We present GenomeDiagram, a flexible, open-source Python module for the visualization of large-scale genomic, comparative genomic and other data with reference to a single chromosome or other biological sequence. GenomeDiagram may be used to generate publication-quality vector graphics, rastered images and in-line streamed graphics for webpages. The package integrates with datatypes from the BioPython project, and is available for Windows, Linux and Mac OS X systems. GenomeDiagram is freely available as source code (under GNU Public License) at http://bioinf.scri.ac.uk/lp/programs.html, and requires Python 2.3 or higher, and recent versions of the ReportLab and BioPython packages. A user manual, example code and images are available at http://bioinf.scri.ac.uk/lp/programs.html.
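A minimal usage sketch is shown below. It assumes the version of GenomeDiagram that was later folded into Biopython as Bio.Graphics.GenomeDiagram, and a hypothetical annotated GenBank file 'plasmid.gb'; the original standalone module's calls may differ slightly.

    from reportlab.lib import colors
    from reportlab.lib.units import cm
    from Bio import SeqIO
    from Bio.Graphics import GenomeDiagram

    record = SeqIO.read("plasmid.gb", "genbank")      # hypothetical annotated sequence
    diagram = GenomeDiagram.Diagram("Example plasmid")
    track = diagram.new_track(1, name="Annotated features")
    features = track.new_set()
    for feature in record.features:
        if feature.type == "gene":
            features.add_feature(feature, color=colors.lightblue, label=True)
    # Render a circular map as vector graphics and as a raster image.
    diagram.draw(format="circular", circular=True, pagesize=(20 * cm, 20 * cm),
                 start=0, end=len(record))
    diagram.write("plasmid_map.pdf", "PDF")
    diagram.write("plasmid_map.png", "PNG")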
Report of AAPM Task Group 162: Software for planar image quality metrology.
Samei, Ehsan; Ikejimba, Lynda C; Harrawood, Brian P; Rong, John; Cunningham, Ian A; Flynn, Michael J
2018-02-01
The AAPM Task Group 162 aimed to provide a standardized approach for the assessment of image quality in planar imaging systems. This report offers a description of the approach as well as the details of the resultant software bundle to measure detective quantum efficiency (DQE) as well as its basis components and derivatives. The methodology and the associated software include the characterization of the noise power spectrum (NPS) from planar images acquired under specific acquisition conditions, modulation transfer function (MTF) using an edge test object, the DQE, and effective DQE (eDQE). First, a methodological framework is provided to highlight the theoretical basis of the work. Then, a step-by-step guide is included to assist in proper execution of each component of the code. Lastly, an evaluation of the method is included to validate its accuracy against model-based and experimental data. The code was built using a Macintosh OSX operating system. The software package contains all the source codes to permit an experienced user to build the suite on a Linux or other *nix type system. The package further includes manuals and sample images and scripts to demonstrate use of the software for new users. The results of the code are in close alignment with theoretical expectations and published results of experimental data. The methodology and the software package offered in AAPM TG162 can be used as baseline for characterization of inherent image quality attributes of planar imaging systems. © 2017 American Association of Physicists in Medicine.
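To make the flavour of such measurements concrete, here is a minimal sketch of a two-dimensional noise power spectrum estimate from a stack of uniformly exposed ROIs, using the usual periodogram form NPS(u,v) = (Δx·Δy)/(Nx·Ny) · <|FFT(ROI − mean)|²> averaged over ROIs. It is an illustration only, not the TG162 code, and assumes detrending by simple mean subtraction.

    import numpy as np

    def nps_2d(flat_rois, pixel_pitch_mm):
        """Two-dimensional NPS from a stack of uniformly exposed ROIs with
        shape (n_rois, N, N), each detrended by subtracting its own mean."""
        n_rois, N, _ = flat_rois.shape
        detrended = flat_rois - flat_rois.mean(axis=(1, 2), keepdims=True)
        periodograms = np.abs(np.fft.fftshift(np.fft.fft2(detrended), axes=(1, 2))) ** 2
        return periodograms.mean(axis=0) * pixel_pitch_mm ** 2 / (N * N)

    # Example with synthetic white noise: the resulting NPS should be roughly flat.
    rois = np.random.default_rng(0).normal(100.0, 5.0, size=(64, 128, 128))
    nps = nps_2d(rois, pixel_pitch_mm=0.1)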
Urban, Trinity; Ziegler, Erik; Lewis, Rob; Hafey, Chris; Sadow, Cheryl; Van den Abbeele, Annick D; Harris, Gordon J
2017-11-01
Oncology clinical trials have become increasingly dependent upon image-based surrogate endpoints for determining patient eligibility and treatment efficacy. As therapeutics have evolved and multiplied in number, the tumor metrics criteria used to characterize therapeutic response have become progressively more varied and complex. The growing intricacies of image-based response evaluation, together with rising expectations for rapid and consistent results reporting, make it difficult for site radiologists to adequately address local and multicenter imaging demands. These challenges demonstrate the need for advanced cancer imaging informatics tools that can help ensure protocol-compliant image evaluation while simultaneously promoting reviewer efficiency. LesionTracker is a quantitative imaging package optimized for oncology clinical trial workflows. The goal of the project is to create an open source zero-footprint viewer for image analysis that is designed to be extensible as well as capable of being integrated into third-party systems for advanced imaging tools and clinical trials informatics platforms. Cancer Res; 77(21); e119-22. ©2017 American Association for Cancer Research.
Description of the IV + V System Software Package.
ERIC Educational Resources Information Center
Microcomputers for Information Management: An International Journal for Library and Information Services, 1984
1984-01-01
Describes the IV + V System, a software package designed by the Institut für Maschinelle Dokumentation for the United Nations General Information Programme and UNISIST to support automation of local information and documentation services. Principal program features and functions outlined include input/output, databank, text image, output, and…
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
XAPiir: A recursive digital filtering package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, D.
1990-09-21
XAPiir is a basic recursive digital filtering package, containing both design and implementation subroutines. XAPiir was developed for the experimental array processor (XAP) software package, and is written in FORTRAN. However, it is intended to be incorporated into any general- or special-purpose signal analysis program. It replaces the older package RECFIL, offering several enhancements. RECFIL is used in several large analysis programs developed at LLNL, including the seismic analysis package SAC, several expert systems (NORSEA and NETSEA), and two general purpose signal analysis packages (SIG and VIEW). This report is divided into two sections: the first describes the use of the subroutine package, and the second, its internal organization. In the first section, the filter design problem is briefly reviewed, along with the definitions of the filter design parameters and their relationship to the subroutine input parameters. In the second section, the internal organization is documented to simplify maintenance and extensions to the package. 5 refs., 9 figs.
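XAPiir itself is a FORTRAN subroutine package, but the same design-then-apply pattern is easy to illustrate in Python with SciPy; the sketch below designs a Butterworth band-pass filter and applies it zero-phase to a synthetic trace. The corner frequencies, order, and sampling rate are arbitrary illustrative choices, not XAPiir defaults.

    import numpy as np
    from scipy import signal

    # Design a 4-pole Butterworth band-pass (1-5 Hz at 100 Hz sampling) as
    # second-order sections, then apply it forward and backward for zero phase.
    sos = signal.butter(4, [1.0, 5.0], btype='bandpass', fs=100.0, output='sos')
    t = np.arange(0.0, 10.0, 0.01)
    trace = np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    filtered = signal.sosfiltfilt(sos, trace)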
Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.
Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong
2016-08-01
The quantitative measurement of atrioventricular junction (AVJ) motion is an important index of ventricular function over the cardiac cycle, including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented, built using the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK) and Qt. The software tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), which contains both an editor and a Microsoft compiler. The software package has been successfully implemented. From this software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems for implementing automatic image analysis functions for CMR images, such as the quantitative measurement of motion by visual tracking.
Hänsel, N H; Schubert, G A; Scholz, B; Nikoubashman, O; Othman, A E; Wiesmann, M; Pjontek, R; Brockmann, M A
2018-02-01
To compare the diagnostic quality of time-of-flight magnetic resonance angiography (TOF-MRA) and metal-artefact-reduction (MAR) flat-panel-detector computed tomography angiography (FPCTA), and to determine the imaging technique best suited for the evaluation of endovascularly and surgically treated aneurysms. The image quality of TOF-MRA and MAR-FPCTA of 44 intracranial implants (coiling: n=20; clipping: n=15; coiling + stenting: n=9) in a patient cohort of 25 was evaluated by two independent readers. Images obtained using MAR-FPCTA (20 s scan time, 496 projections, intravenous contrast medium administration; Artis Zee, Siemens Healthcare, Forchheim) were compared with TOF-MRA images (1.5 or 3 T). Nominal data were analysed using McNemar's chi-square test and ordinal variables using the Wilcoxon rank test. Compared to TOF-MRA, MAR-FPCTA was significantly better suited to detect aneurysm remnants and to evaluate parent vessels after clipping (p<0.01). For coil packages >160 mm³, TOF-MRA provided significantly better assessment than MAR-FPCTA (p<0.01). For small coil packages (<160 mm³), no significant difference between TOF-MRA and MAR-FPCTA (p=0.232) was observed. For different clip sizes (cut-off 492 mm³), likewise, no significant differences were found. The interobserver comparison showed high interrater agreement. MAR-FPCTA is significantly better suited for follow-up examinations of clipped aneurysms, whereas for larger coil packages TOF-MRA is preferable. Smaller coil packages can be analysed using MAR-FPCTA or TOF-MRA. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Development of low-SWaP and low-noise InGaAs detectors
NASA Astrophysics Data System (ADS)
Fraenkel, R.; Berkowicz, E.; Bikov, L.; Elishkov, R.; Giladi, A.; Hirsh, I.; Ilan, E.; Jakobson, C.; Kondrashov, P.; Louzon, E.; Nevo, I.; Pivnik, I.; Tuito, A.; Vasserman, S.
2017-02-01
In recent years SCD has developed InGaAs/InP technology for short-wave infrared (SWIR) imaging. The first product, Cardinal 640, has a 640×512 (VGA) format at 15 μm pitch, and more than two thousand units have already been delivered to customers. Recently we have also introduced Cardinal 1280, an SXGA array with 10 μm pitch aimed at long-range, high-end platforms [1]. One of the big challenges facing SWIR technology is its proliferation to widespread low-cost and low-SWaP applications, specifically low light level (LLL) and image intensifier (II) replacements. In order to achieve this goal we have invested and combined efforts in several design and development directions: 1. Optimization of the InGaAs pixel array, reducing the dark current below 2 fA at 20 °C in order to save TEC cooling power under harsh light and environmental conditions. 2. Design of a new "Low Noise" ROIC targeting a 15 e- noise floor and improved active imaging capabilities. 3. Design of compact, low-SWaP and low-cost packages; in this context we have developed two types of packages: a non-hermetic package with a thermo-electric cooler (TEC) and a hermetic TEC-less ceramic package. 4. Development of efficient TEC-less algorithms for optimal imaging under both daylight and low light level conditions. The result of these combined efforts is a compact, low-SWaP detector that provides performance equivalent to a Gen III image intensifier under starlight conditions. In this paper we will present results from lab and field experiments that support this claim.
Imaging of drug smuggling by body packing.
Sica, Giacomo; Guida, Franco; Bocchini, Giorgio; Iaselli, Francesco; Iadevito, Isabella; Scaglione, Mariano
2015-02-01
Body packing, pushing, and stuffing are hazardous practices with complex medicolegal and social implications. The radiologist plays both a social and a medicolegal role in their assessment, which should not be limited to the identification of the packages: the radiologist must also provide accurate information about their number and exact location, so as to ensure that no package remains in the body packer. Radiologists must also be able to recognize the complications associated with these risky practices. Imaging assessment of body packing is performed essentially through plain abdominal X-ray and computed tomography scans. Ultrasound and magnetic resonance imaging, although offering some advantages, currently have limited use. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Klingelhöfer, G.; Romstedt, J.; Henkel, H.; Michaelis, H.; Brückner, J.; D'Uston, C.
A first-order requirement for any spacecraft mission to land on a solid planetary or moon surface is instrumentation for in-situ mineralogical and chemical analysis. Such analyses provide data needed for primary classification and characterization of the surface materials present. We will discuss a mobile instrument package we have developed for in-situ investigations under harsh environmental conditions, as on Mercury or Mars. This Geochemistry Instrument Package Facility is a compact box, also called the payload cab, containing three small advanced geochemistry and mineralogy instruments: the chemical spectrometer APXS, the mineralogical Mössbauer spectrometer MIMOS II, and a textural imager (close-up camera). The payload cab is equipped with two actuating arms with two degrees of freedom, permitting precision placement of all instruments at a chosen sample. The payload cab is the central part of the small rover Nanokhod, which has the size of a shoebox. The Nanokhod rover is a tethered system with a typical operational range of ~100 m. Of course, the payload cab itself can be attached by means of its arms to any other rover or deployment device. [Reference: A. Schiele, J. Romstedt, C. Lee, S. Klinkner, R. Rieder, R. Gellert, G. Klingelhöfer, B. Bernhardt, H. Michaelis, "The new NANOKHOD engineering model for extreme cold environments," 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space, 5-9 September 2005.]
DeepInfer: open-source deep learning deployment toolkit for image-guided therapy
NASA Astrophysics Data System (ADS)
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-03-01
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.
DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy.
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A; Kapur, Tina; Wells, William M; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-02-11
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.
DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-01-01
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose “DeepInfer” – an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections. PMID:28615794
Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research
2011-01-01
open-source BMI software solutions are currently available, we feel that the Craniux software package fills a specific need in the realm of BMI...data, such as cortical source imaging using EEG or MEG recordings. It is with these characteristics in mind that we feel the Craniux software package...S. Adee, "Dean Kamen's 'luke arm' prosthesis readies for clinical trials," IEEE Spectrum, February 2008, http://spectrum.ieee.org/biomedical
The development of a digitising service centre for natural history collections
Tegelberg, Riitta; Haapala, Jaana; Mononen, Tero; Pajari, Mika; Saarenmaa, Hannu
2012-01-01
Digitarium is a joint initiative of the Finnish Museum of Natural History and the University of Eastern Finland. It was established in 2010 as a dedicated shop for the large-scale digitisation of natural history collections. Digitarium offers service packages based on the digitisation process, including tagging, imaging, data entry, georeferencing, filtering, and validation. During the process, all specimens are imaged, and distance workers take care of the data entry from the images. The customer receives the data in Darwin Core Archive format, as well as images of the specimens and their labels. Digitarium also offers the option of publishing images through Morphbank, sharing data through GBIF, and archiving data for long-term storage. Service packages can also be designed on demand to respond to the specific needs of the customer. The paper also discusses logistics, costs, and intellectual property rights (IPR) issues related to the work that Digitarium undertakes. PMID:22859879
Autonomous microexplosives subsurface tracing system final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engler, Bruce Phillip; Nogan, John; Melof, Brian Matthew
The objective of the autonomous micro-explosive subsurface tracing system is to image the location and geometry of hydraulically induced fractures in subsurface petroleum reservoirs. This system is based on the insertion of a swarm of autonomous micro-explosive packages during the fracturing process, with subsequent triggering of the energetic material to create an array of micro-seismic sources that can be detected and analyzed using existing seismic receiver arrays and analysis software. The project included investigations of energetic mixtures, triggering systems, package size and shape, and seismic output. Given the current absence of any technology capable of such high-resolution mapping of subsurface structures, this technology has the potential for major impact on the petroleum industry, which spends approximately $1 billion per year on hydraulic fracturing operations in the United States alone.
8-Bit Gray Scale Images of Fingerprint Image Groups
National Institute of Standards and Technology Data Gateway
NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (Web, free access) The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.
Healthy choice?: Exploring how children evaluate the healthfulness of packaged foods.
Elliott, Charlene; Brierley, Meaghan
2012-11-06
Today's supermarket contains hundreds of packaged foods specifically targeted at children. Yet research has shown that children are confused by the various visual messages found on packaged food products. This study explores children's nutrition knowledge with regard to packaged food products, to uncover strengths and difficulties they have in evaluating the healthfulness of these foods. Focus groups were conducted with children (grades 1-6). Particular attention was paid to the ways children made use of what they know about nutrition when faced with the visual elements and appeals presented on food packaging. Children relied heavily on packages' written and visual aspects--including colour, images, spokes-characters, front-of-package claims--to assess the healthfulness of a food product. These elements interfere with children's ability to make healthy choices when it comes to packaged foods. Choosing healthy packaged foods is challenging for children due to competing sets of knowledge: one pertains to their understanding of visual, associational cues; the other, to translating their understanding of nutrition to packaged foods. Canada's Food Guide, along with the curriculum taught to Canadian children at schools, does not appear to provide children with the tools necessary to navigate a food environment dominated by packaged foods.
XDesign: an open-source software package for designing X-ray imaging phantoms and experiments.
Ching, Daniel J; Gürsoy, Dogˇa
2017-03-01
The development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.
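As a rough illustration of the kind of end-to-end simulation such a package supports (phantom, acquisition scheme, reconstruction, and a quantitative score), the sketch below uses scikit-image rather than XDesign itself; the 60-view acquisition and the SSIM metric are arbitrary illustrative choices, and the iradon 'filter_name' argument assumes a recent scikit-image release.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon
    from skimage.metrics import structural_similarity

    phantom = shepp_logan_phantom()                       # stand-in for a simulated phantom
    angles = np.linspace(0.0, 180.0, 60, endpoint=False)  # a sparse acquisition scheme
    sinogram = radon(phantom, theta=angles)               # simulate the projections
    recon = iradon(sinogram, theta=angles, filter_name='ramp')
    score = structural_similarity(phantom, recon,
                                  data_range=phantom.max() - phantom.min())
    print(f"SSIM of filtered back-projection from 60 views: {score:.3f}")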
XDesign: An open-source software package for designing X-ray imaging phantoms and experiments
Ching, Daniel J.; Gursoy, Dogˇa
2017-02-21
Here, the development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.
Scherer, Michael D; Kattadiyil, Mathew T; Parciak, Ewa; Puri, Shweta
2014-01-01
Three-dimensional radiographic imaging for dental implant treatment planning is gaining widespread interest and popularity. However, application of the data from 3D imaging can initially be a complex and daunting process. The purpose of this article is to describe features of three software packages and the respective computerized guided surgical templates (GST) fabricated from them. A step-by-step method of interpreting and ordering a GST to simplify the process of surgical planning and implant placement is discussed.
Responding Creatively to Bone and Blaise (2015) through Packaging, Drawing and Assembling
ERIC Educational Resources Information Center
Potts, Miriam
2017-01-01
In this colloquium, the author responds artistically to Bone and Blaise's article "An uneasy assemblage: Prisoners, animals, asylum-seeking children and posthuman packaging," published in "Contemporary Issues in Early Childhood in 2015" (EJ1058615), continuing their trajectory of "different kinds of images than those…
Shallow water benthic imaging and substrate characterization using recreational-grade sidescan-sonar
Buscombe, Daniel D.
2017-01-01
In recent years, lightweight, inexpensive, vessel-mounted ‘recreational grade’ sonar systems have rapidly grown in popularity among aquatic scientists, for swath imaging of benthic substrates. To promote an ongoing ‘democratization’ of acoustical imaging of shallow water environments, methods to carry out geometric and radiometric correction and georectification of sonar echograms are presented, based on simplified models for sonar-target geometry and acoustic backscattering and attenuation in shallow water. Procedures are described for automated removal of the acoustic shadows, identification of bed-water interface for situations when the water is too turbid or turbulent for reliable depth echosounding, and for automated bed substrate classification based on singlebeam full-waveform analysis. These methods are encoded in an open-source and freely-available software package, which should further facilitate use of recreational-grade sidescan sonar, in a fully automated and objective manner. The sequential correction, mapping, and analysis steps are demonstrated using a data set from a shallow freshwater environment.
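One of the simplest of the geometric corrections mentioned above is slant-range correction under a flat-seafloor assumption; a minimal sketch is given below. It is illustrative only and is not drawn from the released software package.

    import numpy as np

    def slant_to_ground_range(slant_range_m, depth_m):
        """Flat-seafloor slant-range correction for one side of a sidescan ping:
        the across-track ground range is sqrt(slant^2 - depth^2)."""
        sr = np.asarray(slant_range_m, dtype=float)
        return np.sqrt(np.maximum(sr ** 2 - depth_m ** 2, 0.0))

    # Example: samples at 0.1 m slant-range spacing in 3 m of water.
    ground = slant_to_ground_range(np.arange(0.0, 50.0, 0.1), depth_m=3.0)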
Application of remote sensing to state and regional problems. [mississippi
NASA Technical Reports Server (NTRS)
Miller, W. F.; Powers, J. S.; Clark, J. R.; Solomon, J. L.; Williams, S. G. (Principal Investigator)
1981-01-01
The methods and procedures used, accomplishments, current status, and future plans are discussed for each of the following applications of LANDSAT in Mississippi: (1) land use planning in Lowndes County; (2) strip mine inventory and reclamation; (3) white-tailed deer habitat evaluation; (4) remote sensing data analysis support systems; (5) discrimination of unique forest habitats in potential lignite areas; (6) changes in gravel operations; and (7) determining freshwater wetlands for inventory and monitoring. The documentation of all existing software and the integration of the image analysis and data base software into a single package are now considered very high priority items.
International Lens Design Conference, Monterey, CA, June 11-14, 1990, Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, G.N.
1990-01-01
The present conference on lens design encompasses physical and geometrical optics, diffractive optics, the optimization of optical design, software packages, ray tracing, the use of artificial intelligence, the achromatization of materials, zoom optics, microoptics and GRIN lenses, and IR lens design. Specific issues addressed include diffraction-performance calculations in lens design, the optimization of the optical transfer function, a rank-down method for automatic lens design, applications of quadric surfaces, the correction of aberrations by using HOEs in UV and visible imaging systems, and an all-refractive telescope for intersatellite communications. Also addressed are automation techniques for optics manufacturing, all-reflective phased-array imaging telescopes, the thermal aberration analysis of a Nd:YAG laser, the analysis of illumination systems, athermalized FLIR optics, and the design of array systems using shared symmetry.
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
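The core of the reconstruction step can be illustrated with a short sketch that back-projects an image pixel through a pinhole camera model and intersects the resulting ray with the light sheet plane. The camera model, variable names, and plane parameterization are assumptions made for illustration, not the authors' implementation.

    import numpy as np

    def pixel_to_lightsheet(u, v, K, R, t, plane_point, plane_normal):
        """Back-project pixel (u, v) through a pinhole camera (intrinsics K,
        world-to-camera rotation R and translation t) and intersect the ray
        with the light sheet plane defined by a point and a normal."""
        d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
        d_world = R.T @ d_cam                              # ray in world frame
        origin = -R.T @ t                                  # camera centre in world frame
        s = np.dot(plane_normal, plane_point - origin) / np.dot(plane_normal, d_world)
        return origin + s * d_world

    # Example: a 1000x1000-pixel camera looking along +z at a sheet 0.5 m away.
    K = np.array([[1200.0, 0.0, 500.0], [0.0, 1200.0, 500.0], [0.0, 0.0, 1.0]])
    point_3d = pixel_to_lightsheet(640.0, 512.0, K, np.eye(3), np.zeros(3),
                                   plane_point=np.array([0.0, 0.0, 0.5]),
                                   plane_normal=np.array([0.0, 0.0, 1.0]))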
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Technical Reports Server (NTRS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.;
2012-01-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approx 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field-of-view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns_-adi under a BSD license.
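The trimmed-mean combination mentioned above is straightforward to sketch; the fragment below combines a stack of registered frames pixel by pixel while discarding a fraction of the lowest and highest values. The 10% trim fraction is an arbitrary illustrative choice, not the value used by ACORNS-ADI.

    import numpy as np
    from scipy import stats

    def trimmed_mean_combine(cube, proportion=0.1):
        """Combine a registered image stack of shape (n_frames, ny, nx) with a
        per-pixel trimmed mean, discarding the given fraction at each end."""
        return stats.trim_mean(cube, proportiontocut=proportion, axis=0)

    # Example: 50 noisy frames, a few of them contaminated by strong outliers.
    frames = np.random.default_rng(0).normal(0.0, 1.0, size=(50, 256, 256))
    frames[::10] += 25.0                      # simulate occasional bad frames
    combined = trimmed_mean_combine(frames)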
NASA Astrophysics Data System (ADS)
Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools graphical user interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.
Computer-aided light sheet flow visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
Rodenacker, K; Aubele, M; Hutzler, P; Adiga, P S
1997-01-01
In molecular pathology, numerical chromosome aberrations have been found to be decisive for the prognosis of malignancy in tumours. The existence of such aberrations can be detected by interphase fluorescence in situ hybridization (FISH). The gain or loss of certain base sequences in the deoxyribonucleic acid (DNA) can be estimated by counting the number of FISH signals per cell nucleus. The quantitative evaluation of such events is a necessary condition for prospective use in diagnostic pathology. To avoid occlusions of signals, the cell nucleus has to be analyzed in three dimensions. Confocal laser scanning microscopy is the means to obtain series of optical thin sections from fluorescence-stained or marked material to fulfil the conditions mentioned above. A graphical user interface (GUI) to a software package for display, inspection, counting and (semi-)automatic analysis of 3-D images for pathologists is outlined, including the underlying methods of 3-D image interaction and segmentation that were developed. The preparative methods are briefly described. Main emphasis is given to the methodical questions of computer-aided analysis of large 3-D image data sets for pathologists. Several automated analysis steps can be performed for segmentation and subsequent quantification. However, in contrast to isolated or cultured cells, tumour material is difficult even for visual inspection. At present, a fully automated digital image analysis of 3-D data is not in sight. A semi-automatic segmentation method is thus presented here.
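A very reduced sketch of the automated part of such an analysis (smooth, threshold, and label bright spots in a 3-D stack) is given below. The Gaussian width and the mean-plus-3-sigma threshold are illustrative assumptions and not the segmentation method described in the paper.

    import numpy as np
    from scipy import ndimage as ndi

    def count_fish_signals(stack, sigma=1.0, threshold=None):
        """Count bright spots in a 3-D confocal stack (z, y, x): smooth with a
        Gaussian, threshold, and label connected components."""
        smoothed = ndi.gaussian_filter(np.asarray(stack, dtype=float), sigma)
        if threshold is None:
            threshold = smoothed.mean() + 3.0 * smoothed.std()
        labels, n_spots = ndi.label(smoothed > threshold)
        return n_spots, labels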
Safety analysis report for packaging (onsite) steel drum
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCormick, W.A.
This Safety Analysis Report for Packaging (SARP) provides the analyses and evaluations necessary to demonstrate that the steel drum packaging system meets the transportation safety requirements of HNF-PRO-154, Responsibilities and Procedures for all Hazardous Material Shipments, for an onsite packaging containing Type B quantities of solid and liquid radioactive materials. The basic component of the steel drum packaging system is the 208 L (55-gal) steel drum.
GUIDOS: tools for the assessment of pattern, connectivity, and fragmentation
NASA Astrophysics Data System (ADS)
Vogt, Peter
2013-04-01
Pattern, connectivity, and fragmentation can be considered as pillars for a quantitative analysis of digital landscape images. The free software toolbox GUIDOS (http://forest.jrc.ec.europa.eu/download/software/guidos) includes a variety of dedicated methodologies for the quantitative assessment of these features. Amongst others, Morphological Spatial Pattern Analysis (MSPA) is used for an intuitive description of image pattern structures and the automatic detection of connectivity pathways. GUIDOS includes tools for the detection and quantitative assessment of key nodes and links as well as to define connectedness in raster images and to setup appropriate input files for an enhanced network analysis using Conefor Sensinode. Finally, fragmentation is usually defined from a species point of view but a generic and quantifiable indicator is needed to measure fragmentation and its changes. Some preliminary results for different conceptual approaches will be shown for a sample dataset. Complemented by pre- and post-processing routines and a complete GIS environment the portable GUIDOS Toolbox may facilitate a holistic assessment in risk assessment studies, landscape planning, and conservation/restoration policies. Alternatively, individual analysis components may contribute to or enhance studies conducted with other software packages in landscape ecology.
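To give a flavour of morphological pattern analysis, the sketch below splits a binary land-cover mask into core and edge classes using binary erosion; full MSPA distinguishes several more classes (islet, bridge, loop, branch, perforation), so this is only a reduced illustration and not the GUIDOS implementation.

    import numpy as np
    from scipy import ndimage as ndi

    def core_edge_classes(mask, edge_width=1):
        """Split a binary land-cover mask into background (0), edge (1) and
        core (2) by eroding the foreground by edge_width pixels."""
        mask = np.asarray(mask, dtype=bool)
        core = ndi.binary_erosion(mask, iterations=edge_width)
        edge = mask & ~core
        out = np.zeros(mask.shape, dtype=np.uint8)
        out[edge] = 1
        out[core] = 2
        return out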
Teaching Advanced Data Analysis Tools to High School Astronomy Students
NASA Astrophysics Data System (ADS)
Black, David V.; Herring, Julie; Hintz, Eric G.
2015-01-01
A major barrier to becoming an astronomer is learning how to analyze astronomical data, such as using photometry to compare the brightness of stars. Most fledgling astronomers learn observation, data reduction, and analysis skills through an upper-division college class. If the same skills could be taught in an introductory high school astronomy class, then more students would have an opportunity to do authentic science earlier, with implications for how many choose to become astronomers. Several software tools have been developed that can analyze astronomical data, ranging from fairly straightforward (AstroImageJ and DS9) to very complex (IRAF and DAOphot). During the summer of 2014, a study was undertaken at Brigham Young University through a Research Experience for Teachers (RET) program to evaluate the effectiveness and ease-of-use of these four software packages. Standard tasks tested included creating a false-color IR image using WISE data in DS9, Adobe Photoshop, and The Gimp; a multi-aperture analysis of variable stars over time using AstroImageJ; creating spectral energy distributions (SEDs) of stars using photometry at multiple wavelengths in AstroImageJ and DS9; and color-magnitude and hydrogen-alpha index diagrams for open star clusters using IRAF and DAOphot. Tutorials were then written and combined with screen captures to teach high school astronomy students at Walden School of Liberal Arts in Provo, UT how to perform these same tasks. They analyzed image data using the four software packages, imported it into Microsoft Excel, and created charts using images from BYU's 36-inch telescope at their West Mountain Observatory. The students' attempts to complete these tasks were observed, mentoring was provided, and the students then reported on their experience through a self-reflection essay and concept test. Results indicate that high school astronomy students can successfully complete professional-level astronomy data analyses when given detailed instruction tailored to their experience level, along with proper support and mentoring. This project was funded by a grant from the National Science Foundation, Grant # PHY1157078.
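The aperture photometry workflow the students practised can be sketched compactly in Python with the photutils package: sum counts in a circular aperture, estimate the local sky in an annulus, and convert the net counts to an instrumental magnitude. The synthetic frame, star positions, and aperture radii below are illustrative assumptions, not values from the study.

    import numpy as np
    from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

    # Hypothetical 300x300 frame: flat sky plus two artificial stars. In practice
    # this would be a calibrated CCD image read from a FITS file.
    rng = np.random.default_rng(0)
    image = 100.0 + rng.normal(0.0, 3.0, (300, 300))
    positions = [(120.3, 88.7), (240.1, 150.2)]                 # assumed centroids (x, y)
    yy, xx = np.mgrid[0:300, 0:300]
    for x0, y0 in positions:
        image += 500.0 * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 2.0 ** 2))

    aper = CircularAperture(positions, r=5.0)                   # source aperture
    annulus = CircularAnnulus(positions, r_in=8.0, r_out=12.0)  # local sky region
    src = aperture_photometry(image, aper)
    sky = aperture_photometry(image, annulus)
    net = src['aperture_sum'] - sky['aperture_sum'] / annulus.area * aper.area
    mags = -2.5 * np.log10(net)                                 # instrumental magnitudes
    print(mags)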
Multifrequency data analysis software on STARLINK
NASA Technical Reports Server (NTRS)
Allan, P. M.
1992-01-01
Although the STARLINK project was set up to provide image processing facilities to UK astronomers, it has grown over the last 12 years to the extent that it now provides most of the data analysis facilities for UK astronomers. One aspect of the growth of the STARLINK network is that it now has to cater for astronomers working in a diverse range of wavelengths. Since a given individual may be working with data obtained at a variety of wavelengths, it is most convenient if the data can be stored in a common format and the programs that analyze the data have a similar 'look and feel'. What is known as 'STARLINK software' is obtained from many sources: STARLINK-funded programmers; astronomers; foreign projects such as AIPS; generally available shareware; and commercial sources when this proves cost effective. This means that the ideal situation of a completely integrated system cannot be realized in practice. Nevertheless, many of the major packages written by STARLINK application programmers and by astronomers do use a common data format, based on the Hierarchical Data System, so that interchange of data between packages designed separately from each other is simply a matter of using the same file names. For example, an astronomer might use KAPPA to read some optical spectra off a FITS tape, then use CCDPACK to debias and flat-field the data (it is easy to set up an overnight batch job to do this if there is a lot of data), then use KAPPA to have a quick look at the data, and then use Figaro to reduce the spectra. It is useful to divide data analysis packages into wavelength-specific packages, or even instrument-specific packages, and general-purpose ones. Once the instrumental signature has been removed from some data, any appropriate general-purpose package can be used to analyze the data. For example, the ASTERIX package deals with X-ray data reduction, but after dealing with all of the X-ray-specific processing, an astronomer may well want to find the brightness of objects in a given frame. Since ASTERIX uses the standard STARLINK data format, the astronomer can use PHOTOM or DAOPHOT 2 to measure the brightness of the objects. Although DAOPHOT was written with optical astronomy in mind, it is useful for analyzing data from several wavelengths. The ability of DAOPHOT 2 to handle non-standard point spread functions can be especially useful in many areas of astronomy.
KAPPA -- Kernel Application Package
NASA Astrophysics Data System (ADS)
Currie, Malcolm J.; Berry, David. S.
KAPPA is an applications package comprising about 180 general-purpose commands for image processing, data visualisation, and manipulation of the standard Starlink data format---the NDF. It is intended to work in conjunction with Starlink's various specialised packages. In addition to the NDF, KAPPA can also process data in other formats by using the `on-the-fly' conversion scheme. Many commands can process data arrays of arbitrary dimension, and others work on both spectra and images. KAPPA operates from both the UNIX C-shell and the ICL command language. This document describes how to use KAPPA and its features. There is some description of techniques too, including a section on writing scripts. This document includes several tutorials and is illustrated with numerous examples. The bulk of this document comprises detailed descriptions of each command as well as classified and alphabetical summaries.
Skills Analysis. Workshop Package on Skills Analysis, Skills Audit and Training Needs Analysis.
ERIC Educational Resources Information Center
Hayton, Geoff; And Others
This four-part package is designed to assist Australian workshop leaders running 2-day workshops on skills analysis, skills audit, and training needs analysis. Part A contains information on how to use the package and a list of workshop aims. Parts B, C, and D consist, respectively, of the workshop leader's guide; overhead transparency sheets and…
Performance of a Diaphragmed Microlens for a Packaged Microspectrometer
Lo, Joe; Chen, Shih-Jui; Fang, Qiyin; Papaioannou, Thanassis; Kim, Eun-Sok; Gundersen, Martin; Marcu, Laura
2009-01-01
This paper describes the design, fabrication, packaging and testing of a microlens integrated in a multi-layered MEMS microspectrometer. The microlens was fabricated using modified PDMS molding to form a suspended lens diaphragm. A Gaussian beam propagation model was used to measure the focal length and quantify the M2 value of the microlens. A tunable calibration source was set up to measure the response of the packaged device. Dual-wavelength separation by the packaged device was demonstrated by CCD imaging and beam profiling of the spectroscopic output. We demonstrated specific techniques to measure critical parameters of micro-optics systems for future optimization of spectroscopic devices. PMID:22399943
Performance assessment of small-package-class nonintrusive inspection systems
NASA Astrophysics Data System (ADS)
Spradling, Michael L.; Hyatt, Roger
1997-02-01
The DoD Counterdrug Technology Development Program has addressed the development and demonstration of technology to enhance nonintrusive inspection of small packages such as passenger baggage, commercially delivered parcels, and breakbulk cargo items. Within the past year they have supported several small package-class nonintrusive inspection system performance assessment activities. All performance assessment programs involved the use of a red/blue team concept and were conducted in accordance with approved assessment protocols. This paper presents a discussion related to the systematic performance assessment of small package-class nonintrusive inspection technologies, including transmission, backscatter and computed tomography x-ray imaging, and protocol-related considerations for the assessment of these systems.
Hard X-ray and gamma-ray imaging spectroscopy for the next solar maximum
NASA Technical Reports Server (NTRS)
Hudson, H. S.; Crannell, C. J.; Dennis, B. R.; Spicer, D. S.; Davis, J. M.; Hurford, G. J.; Lin, R. P.
1990-01-01
The objectives and principles of a single spectroscopic imaging package that can provide effective imaging in the hard X-ray and gamma-ray ranges are described. Called the High-Energy Solar Physics (HESP) mission instrument for solar investigation, the device is based on rotating modulation collimators with germanium semiconductor spectrometers. The instrument is planned to incorporate thick modulation plates, and the range of coverage is discussed. The optics permit high-contrast hard X-ray imaging of small and medium-sized flares with large signal-to-noise ratios. The detectors allow angular resolution of less than 1 arcsec, time resolution of less than 1 s, and spectral resolution of about 1 keV. The HESP package is considered an effective and important instrument for efficiently investigating the high-energy solar events of the near-term future.
ISLE (Image and Signal Processing LISP Environment) reference manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherwood, R.J.; Searfus, R.M.
1990-01-01
ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person doing development of image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop the algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. The full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.
Comparison of rotation algorithms for digital images
NASA Astrophysics Data System (ADS)
Starovoitov, Valery V.; Samal, Dmitry
1999-09-01
The paper presents a comparative study of several algorithms developed for digital image rotation. Without loss of generality, we studied grayscale images. We tested methods that preserve the gray values of the original images, methods that perform interpolation, and two procedures implemented in the Corel Photo-Paint and Adobe Photoshop software packages. Methods for rotating color images may be evaluated in a similar way.
The STARLINK software collection
NASA Astrophysics Data System (ADS)
Penny, A. J.; Wallace, P. T.; Sherman, J. C.; Terret, D. L.
1993-12-01
A demonstration will be given of some recent Starlink software. STARLINK is: a network of computers used by UK astronomers; a collection of programs for the calibration and analysis of astronomical data; a team of people giving hardware, software and administrative support. The Starlink Project has been in operation since 1980 to provide UK astronomers with interactive image processing and data reduction facilities. There are now Starlink computer systems at 25 UK locations, serving about 1500 registered users. The Starlink software collection now has about 25 major packages covering a wide range of astronomical data reduction and analysis techniques, as well as many smaller programs and utilities. At the core of most of the packages is a common `software environment', which provides many of the functions which applications need and offers standardized methods of structuring and accessing data. The software environment simplifies programming and support, and makes it easy to use different packages for different stages of the data reduction. Users see a consistent style, and can mix applications without hitting problems of differing data formats. The Project group coordinates the writing and distribution of this software collection, which is Unix based. Outside the UK, Starlink is used at a large number of places, which range from installations at major UK telescopes, which are Starlink-compatible and managed like Starlink sites, to individuals who run only small parts of the Starlink software collection.
Painting a picture across the landscape with ModelMap
Brian Cooke; Elizabeth Freeman; Gretchen Moisen; Tracey Frescino
2017-01-01
Scientists and statisticians working for the Rocky Mountain Research Station have created a software package that simplifies and automates many of the processes needed for converting models into maps. This software package, called ModelMap, has helped a variety of specialists and land managers to quickly convert data into easily understood graphical images. The...
Toyz: A framework for scientific analysis of large datasets and astronomical images
NASA Astrophysics Data System (ADS)
Moolekamp, F.; Mamajek, E.
2015-11-01
As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool that allows them to browse the files on a server, quickly view very large images (>2 GB) taken with DECam and other cameras with a large FOV, and create their own visualization tools that can be added as extensions to the default Toyz framework.
Adams, K R; Niebuhr, S E; Dickson, J S
2015-12-01
The objectives of this study were to determine the dissolved CO2 and O2 concentrations in the purge of vacuum-packaged pork chops over a 60 day storage period, and to elucidate the relationship of dissolved CO2 and O2 to the microbial populations and shelf life. As the populations of spoilage bacteria increased, the dissolved CO2 increased and the dissolved O2 decreased in the purge. Lactic acid bacteria dominated the spoilage microflora, followed by Enterobacteriaceae and Brochothrix thermosphacta. The surface pH decreased to 5.4 due to carbonic acid and lactic acid production before rising to 5.7 due to ammonia production. A mathematical model was developed which estimated microbial populations based on dissolved CO2 concentrations. Scanning electron microscope images were also taken of the packaging film to observe the biofilm development. The SEM images revealed a two-layer biofilm on the packaging film that was the result of the tri-phase growth environment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Patterson, Brian M; Havrilla, George J
2006-11-01
The number of techniques and instruments available for Fourier transform infrared (FT-IR) microspectroscopic imaging has grown significantly over the past few years. Attenuated total internal reflectance (ATR) FT-IR microspectroscopy reduces sample preparation time and has simplified the analysis of many difficult samples. FT-IR imaging has become a powerful analytical tool using either a focal plane array or a linear array detector, especially when coupled with a chemometric analysis package. The field of view of the ATR-IR microspectroscopic imaging area can be greatly increased from 300 × 300 μm to 2500 × 2500 μm using a larger internal reflection element of 12.5 mm radius instead of the typical 1.5 mm radius. This gives an area increase of 70× before aberrant effects become too great. Parameters evaluated include the change in penetration depth as a function of beam displacement, measurements of the active area, magnification factor, and change in spatial resolution over the imaging area. Drawbacks such as large file size will also be discussed. This technique has been successfully applied to the FT-IR imaging of polydimethylsiloxane foam cross-sections, latent human fingerprints, and a model inorganic mixture, which demonstrates the usefulness of the method for pharmaceuticals.
Urban and regional land use analysis: CARETS and census cities experiment package
NASA Technical Reports Server (NTRS)
Alexander, R. (Principal Investigator); Pease, R. W.; Lins, H. F., Jr.
1975-01-01
The author has identified the following significant results. Successful tentative calibration permits computer programs to be written to convert Skylab thermal tapes into line-printed graymaps showing actual surface radiation temperature distributions at the time of imaging. The calibrations will be further checked when atmospheric soundings are available. The success of the Skylab calibration suggests that satellites are feasible platforms for thermal scanning and provide a much broader geographical field of view than is possible with airborne platforms.
NASA Technical Reports Server (NTRS)
Djorgovski, S. George
1994-01-01
We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful, and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of STATPROG, a package for multivariate statistical analysis of small and moderate-size data sets. The package was tested extensively on a number of real scientific applications, and has produced real, published results.
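To make the classification step above concrete, the following is a minimal sketch of decision-tree object classification on synthetic catalog features. It uses scikit-learn's DecisionTreeClassifier rather than the GID3* induction software that SKICAT actually employs, and the feature columns (magnitude, ellipticity, FWHM) and their distributions are invented for illustration.

# Illustrative sketch only: SKICAT itself uses the GID3* induction system, not scikit-learn.
# Feature columns and distributions below are hypothetical stand-ins for catalog attributes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic "catalog": each row is (magnitude, ellipticity, FWHM in arcsec)
n = 500
stars = np.column_stack([rng.normal(18, 2, n), rng.uniform(0.0, 0.2, n), rng.normal(1.2, 0.1, n)])
galaxies = np.column_stack([rng.normal(20, 2, n), rng.uniform(0.1, 0.7, n), rng.normal(2.5, 0.6, n)])
X = np.vstack([stars, galaxies])
y = np.array([0] * n + [1] * n)           # 0 = star, 1 = galaxy

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict([[19.0, 0.05, 1.25]]))  # compact, round object: likely classified as a star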
EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS
NASA Technical Reports Server (NTRS)
Jayroe, R. R.
1994-01-01
Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.
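As an illustration of the evaluation criteria described above, the sketch below builds a contingency matrix between a classified map and ground truth and computes a chi-square statistic and overall accuracy. It is a small Python/NumPy/SciPy stand-in for the original FORTRAN IV routines, with made-up label arrays.

# Minimal sketch of one evaluation criterion described above: a contingency
# matrix between a classified map and ground truth, plus a chi-square statistic.
# This is an illustration in Python, not the original FORTRAN IV routines.
import numpy as np
from scipy.stats import chi2_contingency

ground_truth = np.array([[0, 0, 1, 1],
                         [0, 1, 1, 2],
                         [2, 2, 2, 2]])          # hypothetical class labels per pixel
classified   = np.array([[0, 0, 1, 1],
                         [0, 1, 2, 2],
                         [2, 2, 2, 1]])

n_classes = 3
contingency = np.zeros((n_classes, n_classes), dtype=int)
for t, c in zip(ground_truth.ravel(), classified.ravel()):
    contingency[t, c] += 1                       # rows: truth, columns: classification

chi2, p, dof, expected = chi2_contingency(contingency)
accuracy = np.trace(contingency) / contingency.sum()
print(contingency, chi2, accuracy)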
PLATSIM: A Simulation and Analysis Package for Large-Order Flexible Systems. Version 2.0
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Kenny, Sean P.; Giesy, Daniel P.
1997-01-01
The software package PLATSIM provides efficient time and frequency domain analysis of large-order generic space platforms. PLATSIM can perform open-loop analysis or closed-loop analysis with linear or nonlinear control system models. PLATSIM exploits the particular form of sparsity of the plant matrices for very efficient linear and nonlinear time domain analysis, as well as frequency domain analysis. A new, original algorithm for the efficient computation of open-loop and closed-loop frequency response functions for large-order systems has been developed and is implemented within the package. Furthermore, a novel and efficient jitter analysis routine which determines jitter and stability values from time simulations in a very efficient manner has been developed and is incorporated in the PLATSIM package. In the time domain analysis, PLATSIM simulates the response of the space platform to disturbances and calculates the jitter and stability values from the response time histories. In the frequency domain analysis, PLATSIM calculates frequency response function matrices and provides the corresponding Bode plots. The PLATSIM software package is written in MATLAB script language. A graphical user interface is developed in the package to provide convenient access to its various features.
gr-MRI: A software package for magnetic resonance imaging using software defined radios
NASA Astrophysics Data System (ADS)
Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.
2016-09-01
The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
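For readers unfamiliar with what the single-pulse sequence records, the following short NumPy sketch simulates an idealized complex baseband free-induction-decay signal and locates its spectral peak. It is a physics illustration only, not gr-MRI or GNU Radio code, and the sampling rate, T2* and frequency offset are arbitrary.

# Idealized free-induction-decay (FID) signal of the kind the single-pulse
# sequence records; this is a physics illustration, not gr-MRI/GNU Radio code.
import numpy as np

fs = 100e3                   # receiver sampling rate (Hz), arbitrary
t = np.arange(0, 50e-3, 1 / fs)
t2_star = 10e-3              # effective transverse relaxation time (s), arbitrary
df = 500.0                   # resonance offset from the demodulation frequency (Hz)

fid = np.exp(-t / t2_star) * np.exp(2j * np.pi * df * t)   # complex baseband FID
spectrum = np.fft.fftshift(np.fft.fft(fid))                # spectral peak near +500 Hz
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
print(freqs[np.argmax(np.abs(spectrum))])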
Alternative Packaging for Back-Illuminated Imagers
NASA Technical Reports Server (NTRS)
Pain, Bedabrata
2009-01-01
An alternative scheme has been conceived for packaging of silicon-based back-illuminated, back-side-thinned complementary metal oxide/semiconductor (CMOS) and charge-coupled-device image-detector integrated circuits, including an associated fabrication process. This scheme and process are complementary to those described in "Making a Back-Illuminated Imager With Back-Side Connections" (NPO-42839), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), page 38. To avoid misunderstanding, it should be noted that in the terminology of imaging integrated circuits, "front side" or "back side" does not necessarily refer to the side that, during operation, faces toward or away from a source of light or other object to be imaged. Instead, "front side" signifies that side of a semiconductor substrate upon which the pixel pattern and the associated semiconductor devices and metal conductor lines are initially formed during fabrication, and "back side" signifies the opposite side. If the imager is of the type called "back-illuminated," then the back side is the one that faces an object to be imaged. Initially, a back-illuminated, back-side-thinned image-detector is fabricated with its back side bonded to a silicon handle wafer. At a subsequent stage of fabrication, the front side is bonded to a glass wafer (for mechanical support) and the silicon handle wafer is etched away to expose the back side. The frontside integrated circuitry includes metal input/output contact pads, which are rendered inaccessible by the bonding of the front side to the glass wafer. Hence, one of the main problems is to make the input/output contact pads accessible from the back side, which is ultimately to be the side accessible to the external world. The present combination of an alternative packaging scheme and associated fabrication process constitute a solution of the problem.
Multiple-Group Analysis Using the sem Package in the R System
ERIC Educational Resources Information Center
Evermann, Joerg
2010-01-01
Multiple-group analysis in covariance-based structural equation modeling (SEM) is an important technique to ensure the invariance of latent construct measurements and the validity of theoretical models across different subpopulations. However, not all SEM software packages provide multiple-group analysis capabilities. The sem package for the R…
ImageParser: a tool for finite element generation from three-dimensional medical images
Yin, HM; Sun, LZ; Wang, G; Yamada, T; Wang, J; Vannier, MW
2004-01-01
Background The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It is yet a challenge to build up an FEM mesh directly from a volumetric image, partially because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. Methods A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the context of the image, including neighboring tissues and organs, completes segmentation of different tissues, and meshes the organ into elements. Results The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. Conclusion The ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with customer-defined segmentation information. PMID:15461787
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
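The leveling step described above can be illustrated with a much simpler, non-robust variant: fitting and subtracting a least-squares plane from the image. The NumPy sketch below (with a synthetic tilted scan) shows only this tilt-removal idea; the paper itself estimates a smooth 2-D trend with a robust local regression and downweights outliers, which is not reproduced here.

# Simplified leveling sketch: fit and subtract a least-squares plane from an
# image. The paper uses a robust local-regression trend estimate; this plain
# plane fit only illustrates the tilt-removal step.
import numpy as np

def level_plane(img):
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(img.size)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    trend = (A @ coeffs).reshape(ny, nx)
    return img - trend

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
tilted = 0.02 * xx - 0.01 * yy + rng.normal(0, 0.05, (64, 64))  # synthetic tilted scan
print(np.ptp(level_plane(tilted)) < np.ptp(tilted))             # tilt removed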
Neural Implants, Packaging for Biocompatible Implants, and Improving Fabricated Capacitors
NASA Astrophysics Data System (ADS)
Agger, Elizabeth Rose
We have completed the circuit design and packaging procedure for an NIH-funded neural implant, called a MOTE (Microscale Optoelectronically Transduced Electrode). Neural recording implants for mice have greatly advanced neuroscience, but they are often damaging and limited in their recording location. This project will result in free-floating implants that cause less damage, provide rapid electronic recording, and increase range of recording across the cortex. A low-power silicon IC containing amplification and digitization sub-circuits is powered by a dual-function gallium arsenide photovoltaic and LED. Through thin film deposition, photolithography, and chemical and physical etching, the Molnar Group and the McEuen Group (Applied and Engineering Physics department) will package the IC and LED into a biocompatible implant of approximately 100 μm³. The IC and LED are complete and we have begun refining this packaging procedure in the Cornell NanoScale Science & Technology Facility. ICs with 3D time-resolved imaging capabilities can image microorganisms and other biological samples given proper packaging. A portable, flat, easily manufactured package would enable scientists to place biological samples on slides directly above the Molnar group's imaging chip. We have developed a packaging procedure using laser cutting, photolithography, epoxies, and metal deposition. Using a flip-chip method, we verified the process by aligning and adhering a sample chip to a holder wafer. In the CNF, we have worked on a long-term metal-insulator-metal (MIM) capacitor characterization project. Former Fellow and continuing CNF user Kwame Amponsah developed the original procedure for the capacitor fabrication, and another former fellow, Jonilyn Longenecker, revised the procedure and began the arduous process of characterization. MIM caps are useful to clean room users as testing devices to verify electronic characteristics of their active circuitry. This project's objective is to determine differences in current-voltage (IV) and capacitance-voltage (CV) relationships across variations in capacitor size and dielectric type. This effort requires an approximately 20-step process repeated for two-to-six varieties (dependent on temperature and thermal versus plasma options) of the following dielectrics: HfO2, SiO2, Al2O3, TaOx, and TiO2.
User's manual for the coupled rotor/airframe vibration analysis graphic package
NASA Technical Reports Server (NTRS)
Studwell, R. E.
1982-01-01
User instructions for a graphics package for coupled rotor/airframe vibration analysis are presented. Responses to plot package messages which the user must make to activate plot package operations and options are described. Installation instructions required to set up the program on the CDC system are included. The plot package overlay structure and subroutines which have to be modified for the CDC system are also described. Operating instructions for CDC applications are included.
3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Spruce, Joseph
2010-01-01
An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, while wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.
Back-illuminated CCD imager adapted for contrast transfer function measurements thereon
NASA Technical Reports Server (NTRS)
Levine, Peter A. (Inventor)
1987-01-01
Stripe patterns of varying spatial frequency, formed in the top-metalization of a back-illuminated solid-state imager, facilitate on-line measurement of contrast transfer function during wafer-probe testing. The imager may be packaged to allow front-illumination during in-the-field testing after its manufacture.
NASA Astrophysics Data System (ADS)
Lu, Hong; Gargesha, Madhusudhana; Wang, Zhao; Chamie, Daniel; Attizani, Guilherme F.; Kanaya, Tomoaki; Ray, Soumya; Costa, Marco A.; Rollins, Andrew M.; Bezerra, Hiram G.; Wilson, David L.
2013-02-01
Intravascular OCT (iOCT) is an imaging modality with ideal resolution and contrast to provide accurate in vivo assessments of tissue healing following stent implantation. Our Cardiovascular Imaging Core Laboratory has served >20 international stent clinical trials with >2000 stents analyzed. Each stent requires 6-16hrs of manual analysis time and we are developing highly automated software to reduce this extreme effort. Using classification technique, physically meaningful image features, forward feature selection to limit overtraining, and leave-one-stent-out cross validation, we detected stent struts. To determine tissue coverage areas, we estimated stent "contours" by fitting detected struts and interpolation points from linearly interpolated tissue depths to a periodic cubic spline. Tissue coverage area was obtained by subtracting lumen area from the stent area. Detection was compared against manual analysis of 40 pullbacks. We obtained recall = 90+/-3% and precision = 89+/-6%. When taking struts deemed not bright enough for manual analysis into consideration, precision improved to 94+/-6%. This approached inter-observer variability (recall = 93%, precision = 96%). Differences in stent and tissue coverage areas are 0.12 +/- 0.41 mm2 and 0.09 +/- 0.42 mm2, respectively. We are developing software which will enable visualization, review, and editing of automated results, so as to provide a comprehensive stent analysis package. This should enable better and cheaper stent clinical trials, so that manufacturers can optimize the myriad of parameters (drug, coverage, bioresorbable versus metal, etc.) for stent design.
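A sketch of the geometry step described above: fit a periodic cubic spline through (angle, radius) strut points and compute the enclosed stent area with the shoelace formula, from which a lumen area would be subtracted to give the tissue coverage area. The point values and lumen area below are invented, and this Python/SciPy code is not the authors' software.

# Sketch of the geometry step described above: fit a periodic cubic spline
# through (angle, radius) strut points and compute the enclosed (stent) area;
# tissue coverage would then be stent area minus lumen area. Values are made up.
import numpy as np
from scipy.interpolate import CubicSpline

theta = np.linspace(0, 2 * np.pi, 9)          # angular positions of detected struts
radius = np.array([1.5, 1.6, 1.55, 1.5, 1.45, 1.5, 1.6, 1.55, 1.5])  # mm; last == first
spline = CubicSpline(theta, radius, bc_type='periodic')

t = np.linspace(0, 2 * np.pi, 720)
x, y = spline(t) * np.cos(t), spline(t) * np.sin(t)
stent_area = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace
lumen_area = 1.2 * np.pi                      # hypothetical lumen area (mm^2)
print(stent_area - lumen_area)                # tissue coverage area (mm^2)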
Mahmood, Feroze; Karthik, Swaminathan; Subramaniam, Balachundhar; Panzica, Peter J; Mitchell, John; Lerner, Adam B; Jervis, Karinne; Maslow, Andrew D
2008-04-01
To study the feasibility of using 3-dimensional (3D) echocardiography in the operating room for mitral valve repair or replacement surgery. To perform geometric analysis of the mitral valve before and after repair. Prospective observational study. Academic, tertiary care hospital. Consecutive patients scheduled for mitral valve surgery. Intraoperative reconstruction of 3D images of the mitral valve. One hundred and two patients had 3D analysis of their mitral valve. Successful image reconstruction was performed in 93 patients; 8 patients had arrhythmias or a dilated mitral valve annulus resulting in significant artifacts. Time from acquisition to reconstruction and analysis was less than 5 minutes. Surgeon identification of mitral valve anatomy was 100% accurate. The study confirms the feasibility of performing intraoperative 3D reconstruction of the mitral valve. This data can be used for confirmation and communication of 2-dimensional data to the surgeons by obtaining a surgical view of the mitral valve. The incorporation of color-flow Doppler into these 3D images helps in identification of the commissural or perivalvular location of regurgitant orifice. With improvements in the processing power of the current generation of echocardiography equipment, it is possible to quickly acquire, reconstruct, and manipulate images to help with timely diagnosis and surgical planning.
XCAT/DRASIM: a realistic CT/human-model simulation package
NASA Astrophysics Data System (ADS)
Fung, George S. K.; Stierstorfer, Karl; Segars, W. Paul; Taguchi, Katsuyuki; Flohr, Thomas G.; Tsui, Benjamin M. W.
2011-03-01
The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended CArdiac-Torso (XCAT) phantom, a computer generated NURBS surface based phantom that provides a realistic model of human anatomy and respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation program. Unlike other CT simulation tools which are based on simple mathematical primitives or voxelized phantoms, this new simulation package has the advantages of utilizing a realistic model of human anatomy and physiological motions without voxelization and with accurate modeling of the characteristics of clinical Siemens CT systems. First, we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library which consists of functions to read-in the NURBS surfaces of anatomical objects and their overlapping order and material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line integral calculation in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated simulation package by performing a number of sample simulations of multiple x-ray projections from different views followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the design and optimization of CT scanners, and the development of scanning protocols and image reconstruction methods for improving CT image quality and reducing radiation dose.
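Once the intersection points of a ray with the anatomical surfaces are known, the projection value reduces to a sum of attenuation times segment length. The minimal NumPy sketch below shows only that accumulation step, with hypothetical distances and attenuation coefficients; the NURBS ray-surface intersection itself is not reproduced.

# Minimal sketch of the line-integral accumulation step: once the ray's
# intersection distances with the object surfaces are known, the integral is
# the sum of attenuation * segment length. The NURBS intersection itself is
# not shown; distances and mu values below are hypothetical.
import numpy as np

t_hits = np.array([0.0, 3.2, 7.8, 12.1])       # sorted distances (cm) along the ray
mu = np.array([0.02, 0.19, 0.05])              # attenuation (1/cm) between consecutive hits

line_integral = np.sum(mu * np.diff(t_hits))   # integral of mu dl along the ray path
intensity_ratio = np.exp(-line_integral)       # Beer-Lambert transmitted fraction
print(line_integral, intensity_ratio)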
clusterProfiler: an R package for comparing biological themes among gene clusters.
Yu, Guangchuang; Wang, Li-Gen; Han, Yanyan; He, Qing-Yu
2012-05-01
Increasing quantitative data generated from transcriptomics and proteomics require integrative strategies for analysis. Here, we present an R package, clusterProfiler that automates the process of biological-term classification and the enrichment analysis of gene clusters. The analysis module and visualization module were combined into a reusable workflow. Currently, clusterProfiler supports three species, including humans, mice, and yeast. Methods provided in this package can be easily extended to other species and ontologies. The clusterProfiler package is released under Artistic-2.0 License within Bioconductor project. The source code and vignette are freely available at http://bioconductor.org/packages/release/bioc/html/clusterProfiler.html.
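The statistic at the heart of such an over-representation analysis is a per-term hypergeometric test. The following SciPy sketch shows that test in isolation; the counts are invented, and this is not clusterProfiler's R interface.

# The core statistic behind an over-representation analysis such as the one
# clusterProfiler automates is a hypergeometric test per term. This Python/SciPy
# sketch shows that test only; it is not the package's R interface. Counts are invented.
from scipy.stats import hypergeom

M = 20000   # background genes
n = 150     # genes annotated to the term
N = 400     # genes in the submitted cluster
k = 12      # cluster genes annotated to the term

p_value = hypergeom.sf(k - 1, M, n, N)   # P(X >= k)
print(p_value)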
Seo, Yeong-Hyeon; Hwang, Kyungmin; Jeong, Ki-Hun
2018-02-19
We report a 1.65 mm diameter forward-viewing confocal endomicroscopic catheter using a flip-chip bonded electrothermal MEMS fiber scanner. Lissajous scanning was implemented by the electrothermal MEMS fiber scanner. The Lissajous scanned MEMS fiber scanner was precisely fabricated to facilitate flip-chip connection, and bonded with a printed circuit board. The scanner was successfully combined with a fiber-based confocal imaging system. A two-dimensional reflectance image of the metal pattern 'OPTICS' was successfully obtained with the scanner. The flip-chip bonded scanner minimizes electrical packaging dimensions. The inner diameter of the flip-chip bonded MEMS fiber scanner is 1.3 mm. The flip-chip bonded MEMS fiber scanner is fully packaged with a 1.65 mm diameter housing tube, 1 mm diameter GRIN lens, and a single mode optical fiber. The packaged confocal endomicroscopic catheter can provide a new breakthrough for diverse in-vivo endomicroscopic applications.
Poppr: an R package for genetic analysis of populations with mixed (clonal/sexual) reproduction
USDA-ARS?s Scientific Manuscript database
Poppr is an R package for analysis of population genetic data. It extends the adegenet package and provides several novel tools, particularly with regard to analysis of data from admixed, clonal, and/or sexual populations. Currently, poppr can be used for dominant/codominant and haploid/diploid gene...
Auer, Tibor; Churchill, Nathan W.; Flandin, Guillaume; Guntupalli, J. Swaroop; Raffelt, David; Quirion, Pierre-Olivier; Smith, Robert E.; Strother, Stephen C.; Varoquaux, Gaël
2017-01-01
The rate of progress in human neurosciences is limited by the inability to easily apply a wide range of analysis methods to the plethora of different datasets acquired in labs around the world. In this work, we introduce a framework for creating, testing, versioning and archiving portable applications for analyzing neuroimaging data organized and described in compliance with the Brain Imaging Data Structure (BIDS). The portability of these applications (BIDS Apps) is achieved by using container technologies that encapsulate all binary and other dependencies in one convenient package. BIDS Apps run on all three major operating systems with no need for complex setup and configuration and thanks to the comprehensiveness of the BIDS standard they require little manual user input. Previous containerized data processing solutions were limited to single user environments and not compatible with most multi-tenant High Performance Computing systems. BIDS Apps overcome this limitation by taking advantage of the Singularity container technology. As a proof of concept, this work is accompanied by 22 ready to use BIDS Apps, packaging a diverse set of commonly used neuroimaging algorithms. PMID:28278228
Paskevich, Valerie F.
1992-01-01
The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. Given the need to process side-scan data in the field, and the increased power and reduced cost of workstations, a need was identified for an image processing package on a UNIX-based computer system that could be used in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.
Clark, Robin A; Shoaib, Mohammed; Hewitt, Katherine N; Stanford, S Clare; Bate, Simon T
2012-08-01
InVivoStat is a free-to-use statistical software package for analysis of data generated from animal experiments. The package is designed specifically for researchers in the behavioural sciences, where exploiting the experimental design is crucial for reliable statistical analyses. This paper compares the analysis of three experiments conducted using InVivoStat with other widely used statistical packages: SPSS (V19), PRISM (V5), UniStat (V5.6) and Statistica (V9). We show that InVivoStat provides results that are similar to those from the other packages and, in some cases, are more advanced. This investigation provides evidence of further validation of InVivoStat and should strengthen users' confidence in this new software package.
NASA Astrophysics Data System (ADS)
Hoeller, Timothy
2007-06-01
Samples of EVOH films with compositions of 29-44 mol% ethylene content were exposed to thermal aging with and without light exposure. The results of dielectric spectroscopy on select samples showed Cole-Cole plots of skewed dielectric constant, indicating multiple distributions of dipole relaxation times. The onset of decreases in dielectric response occurs earlier in samples exposed to elevated temperature under light exposure. Lower permittivity is exhibited in samples of higher ethylene content. Results from heat-exposed samples are presented. Colorimetric analysis indicates only a slight film yellowing in one case. Raman spectroscopy on untreated films discerns changes in the C-C-O stretch associated with the alcohol. The effects of aging on microstructure may cause hindrance of molecular motion from moisture desorption. Slight material degradation occurs from film hardening, presumably due to crosslinking. An electrical circuit model of the conduction processes associated with the EVOH films is presented. Dielectric analysis shows promise for monitoring material changes related to deterioration. We are also using these methods to understand fluorescence imaging, which has recently been released for paper and plastic materials analysis. Future work may include refinement of these techniques for identification of changes in material properties correlated to packaging material barrier resistance.
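For reference, the skewed, depressed arcs seen in such Cole-Cole plots are commonly parameterized by the standard Cole-Cole form of the complex permittivity (a textbook expression, not a result of this study), where alpha = 0 recovers single-relaxation-time Debye behaviour and alpha > 0 corresponds to a broadened distribution of relaxation times:

\varepsilon^{*}(\omega) = \varepsilon_{\infty} + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + (i\,\omega\,\tau_{0})^{\,1-\alpha}}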
Assessment of Automated Analyses of Cell Migration on Flat and Nanostructured Surfaces
Grădinaru, Cristian; Łopacińska, Joanna M.; Huth, Johannes; Kestler, Hans A.; Flyvbjerg, Henrik; Mølhave, Kristian
2012-01-01
Motility studies of cells often rely on computer software that analyzes time-lapse recorded movies and establishes cell trajectories fully automatically. This raises the question of reproducibility of results, since different programs could yield significantly different results of such automated analysis. The fact that the segmentation routines of such programs are often challenged by nanostructured surfaces makes the question more pertinent. Here we illustrate how it is possible to track cells on bright field microscopy images with image analysis routines implemented in an open-source cell tracking program, PACT (Program for Automated Cell Tracking). We compare the automated motility analysis of three cell tracking programs, PACT, Autozell, and TLA, using the same movies as input for all three programs. We find that different programs track overlapping, but different subsets of cells due to different segmentation methods. Unfortunately, population averages based on such different cell populations differ significantly in some cases. Thus, results obtained with one software package are not necessarily reproducible by other software. PMID:24688640
Perceptions of branded and plain cigarette packaging among Mexican youth.
Mutti, Seema; Hammond, David; Reid, Jessica L; White, Christine M; Thrasher, James F
2017-08-01
Plain cigarette packaging, which seeks to remove all brand imagery and standardize the shape and size of cigarette packs, represents a novel policy measure to reduce the appeal of cigarettes. Plain packaging has been studied primarily in high-income countries like Australia and the UK. It is unknown whether the effects of plain packaging may differ in low- and middle-income countries with a shorter history of tobacco regulation, such as Mexico. An experimental study was conducted in Mexico City to examine perceptions of branded and plain cigarette packaging among smoking and non-smoking Mexican adolescents (n = 359). Respondents were randomly assigned to a branded or plain pack condition and rated 12 cigarette packages for appeal, taste, harm to health and smoker-image traits. As a behavioral measure of appeal, respondents were offered (although not given) four cigarette packs (either branded or plain) and asked to select one to keep. The findings indicated that branded packs were perceived to be more appealing (β = 3.40, p < 0.001) and to contain better tasting cigarettes (β = 3.53, p < 0.001), but were not perceived as less harmful than plain packs. Participants rated people who smoke the branded packs as having relatively more positive smoker-image traits overall (β = 2.10, p < 0.001), with particularly strong differences found among non-smokers for the traits 'glamorous', 'stylish', 'popular' and 'sophisticated' (p < 0.001). No statistically significant difference was found for the proportion of youth that accepted when offered branded compared with plain packs. These results suggest that plain packaging may reduce brand appeal among Mexican youth, consistent with findings in high-income countries. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
New Mexico Play Fairway Analysis: Particle Tracking ArcGIS Map Packages
Jeff Pepin
2015-11-15
These are map packages used to visualize geochemical particle-tracking analysis results in ArcGIS. It includes individual map packages for several regions of New Mexico including: Acoma, Rincon, Gila, Las Cruces, Socorro and Truth or Consequences.
Middleton, Mark; Frantzis, Jim; Healy, Brendan; Jones, Mark; Murry, Rebecca; Kron, Tomas; Plank, Ashley; Catton, Charles; Martin, Jarad
2011-12-01
The quality assurance (QA) of image-guided radiation therapy (IGRT) within clinical trials is in its infancy, but its importance will continue to grow as IGRT becomes the standard of care. The purpose of this study was to demonstrate the feasibility of IGRT QA as part of the credentialing process for a clinical trial. As part of the accreditation process for a randomized trial of prostate cancer hypofractionation, IGRT benchmarking across multiple sites was incorporated. Each participating site underwent IGRT credentialing via a site visit. In all centers, intraprostatic fiducials were used. A real-time assessment of analysis of IGRT was performed using Varian's Offline Review image analysis package. Two-dimensional (2D) kV and MV electronic portal imaging prostate patient datasets were used, consisting of 39 treatment verification images for 2D/2D comparison with the digitally reconstructed radiograph derived from the planning scan. The influence of differing sites, image modality, and observer experience on IGRT was then assessed. Statistical analysis of the mean mismatch errors showed that IGRT analysis was performed uniformly regardless of institution, therapist seniority, or imaging modality across the three orthogonal planes. The IGRT component of clinical trials that include sophisticated planning and treatment protocols must undergo stringent QA. The IGRT technique of intraprostatic fiducials has been shown in the context of this trial to be undertaken in a uniform manner across Australia. Extending this concept to many sites with different equipment and IGRT experience will require a robust remote credentialing process. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
1992-01-01
To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.
Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA
NASA Astrophysics Data System (ADS)
Staley, T. D.; Anderson, G. E.
2015-11-01
In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
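The pexpect pattern mentioned above can be illustrated with a minimal example that spawns an ordinary interactive Python interpreter as a stand-in for a legacy command-line pipeline, waits for its prompt, sends a command, and reads the reply. This is a generic sketch of the technique, not drive-casa's or drive-ami's own interface.

# Minimal sketch of the pexpect pattern described above: spawn a command-line
# process, wait for its prompt, send a command, and read the reply. A plain
# Python interpreter stands in for the external pipeline; drive-casa's own API is not shown.
import pexpect

child = pexpect.spawn('python3 -i -q', encoding='utf-8', timeout=10)
child.expect('>>> ')                 # wait for the interactive prompt
child.sendline('print(6 * 7)')
child.expect('>>> ')                 # the output precedes the next prompt
print(child.before)                  # contains the echoed command and "42"
child.sendline('exit()')
child.close()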
Chen, Lung-Tai; Chang, Jin-Sheng; Hsu, Chung-Yi; Cheng, Wood-Hi
2009-01-01
A novel plastic packaging of a piezoresistive pressure sensor using a patterned ultra-thick photoresist is experimentally and theoretically investigated. Two pressure sensor packages of the sacrifice-replacement and dam-ring type were used in this study. The characteristics of the packaged pressure sensors were investigated by using a finite-element (FE) model and experimental measurements. The results show that the packaged pressure sensor with a small sensing-channel opening or with a thin silicon membrane for the dam-ring approach had a high packaging-induced thermal stress, leading to a high temperature coefficient of span (TCO) response of −0.19% span/°C. The results also show that the packaged pressure sensors with a large sensing-channel opening for the sacrifice-replacement approach had significantly reduced packaging-induced thermal stress, and hence a low TCO response of −0.065% span/°C. However, the packaged pressure sensors of both the sacrifice-replacement and dam-ring type still met the specification of −0.2% span/°C of the unpackaged pressure sensor. In addition, the size of the proposed packages was 4 × 4 × 1.5 mm³, which is about seven times smaller than commercialized packages. With the same packaging requirement, the proposed packaging approaches may provide an adequate solution for use in other open-cavity sensors, such as gas sensors, image sensors, and humidity sensors. PMID:22454580
PresenceAbsence: An R package for presence absence analysis
Elizabeth A. Freeman; Gretchen Moisen
2008-01-01
The PresenceAbsence package for R provides a set of functions useful when evaluating the results of presence-absence analysis, for example, models of species distribution or the analysis of diagnostic tests. The package provides a toolkit for selecting the optimal threshold for translating a probability surface into presence-absence maps specifically tailored to their...
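As a rough illustration of the threshold-selection task described above, the NumPy sketch below scans candidate thresholds and keeps the one that maximizes the fraction of sites classified correctly. This is only one of the several optimization criteria the R package provides, the data are synthetic, and the code is not the package itself.

# Sketch of the thresholding idea described above: scan candidate thresholds
# and keep the one that maximizes the fraction of sites classified correctly.
# This is one of several criteria the R package offers; data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
observed = rng.integers(0, 2, 200)                         # 0 = absent, 1 = present
predicted_prob = np.clip(observed * 0.5 + rng.uniform(0, 0.5, 200), 0, 1)

thresholds = np.linspace(0.01, 0.99, 99)
accuracy = [np.mean((predicted_prob >= t).astype(int) == observed) for t in thresholds]
best = thresholds[int(np.argmax(accuracy))]
print(best, max(accuracy))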
NASA Astrophysics Data System (ADS)
Karaszi, Zoltan; Konya, Andrew; Dragan, Feodor; Jakli, Antal; CPIP/LCI; CS Dept. of Kent State University Collaboration
Polarizing optical microscopy (POM) is traditionally the best-established method of studying liquid crystals, and its use dates back to Otto Lehmann in 1890. An expert who is familiar with the optics of anisotropic materials and the typical textures of liquid crystals can identify phases with relatively high confidence; however, unambiguous identification usually requires other expensive and time-consuming experiments. Replacement of subjective, qualitative human-eye-based liquid crystal texture analysis with quantitative computerized image analysis started only recently and has been used to enhance the detection of smooth phase transitions and to determine the order parameter and birefringence of specific liquid crystal phases. We investigate whether a computer can recognize and name the phase from which a texture was taken. To judge the potential of reliable image recognition based on this procedure, we used 871 images of liquid crystal textures belonging to five main categories: nematic, smectic A, smectic C, cholesteric, and crystal, and used the neural network clustering technique included in the Java data mining software package WEKA. A neural network trained on a set of 827 LC textures classified the remaining 44 textures with 80% accuracy.
Mechanical Design of the LSST Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordby, Martin; Bowden, Gordon; Foss, Mike
2008-06-13
The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.
MODEL 9977 B(M)F-96 SAFETY ANALYSIS REPORT FOR PACKAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramczyk, G; Paul Blanton, P; Kurt Eberl, K
2006-05-18
This Safety Analysis Report for Packaging (SARP) documents the analysis and testing performed on and for the 9977 Shipping Package, referred to as the General Purpose Fissile Package (GPFP). The performance evaluation presented in this SARP documents the compliance of the 9977 package with the regulatory safety requirements for Type B packages. Per 10 CFR 71.59, for the 9977 packages evaluated in this SARP, the value of "N" is 50, and the Transport Index based on nuclear criticality control is 1.0. The 9977 package is designed with a high degree of single containment. The 9977 complies with 10 CFR 71 (2002), Department of Energy (DOE) Order 460.1B, DOE Order 460.2, and 10 CFR 20 (2003) for As Low As Reasonably Achievable (ALARA) principles. The 9977 also satisfies the requirements of the Regulations for the Safe Transport of Radioactive Material--1996 Edition (Revised)--Requirements, IAEA Safety Standards, Safety Series No. TS-R-1 (ST-1, Rev.), International Atomic Energy Agency, Vienna, Austria (2000). The 9977 package is designed, analyzed and fabricated in accordance with Section III of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code, 1992 edition.
NASA Technical Reports Server (NTRS)
1998-01-01
PixelVision, Inc., has developed a series of integrated imaging engines capable of high-resolution image capture at dynamic speeds. This technology was used originally at Jet Propulsion Laboratory in a series of imaging engines for a NASA mission to Pluto. By producing this integrated package, Charge-Coupled Device (CCD) technology has been made accessible to a wide range of users.
Extremely High-Frequency Holographic Radar Imaging of Personnel and Mail
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMakin, Douglas L.; Sheen, David M.; Griffin, Jeffrey W.
2006-08-01
The awareness of terrorists covertly transporting chemical warfare (CW) and biological warfare (BW) agents into government, military, and civilian facilities to harm the occupants has increased dramatically since the attacks of 9/11. Government and civilian security personnel have a need for innovative surveillance technology that can rapidly detect these lethal agents, even when they are hidden away in sealed containers and concealed either under clothing or in hand-carried items such as mailed packages or handbags. Sensor technologies that detect BW and CW agents in mail or in sealed containers carried under the clothing are under development. One promising sensor technology presently under development to defeat these threats is active millimeter-wave holographic radar imaging, which can readily image concealed items behind paper, cardboard, and clothing. Feasibility imaging studies at frequencies greater than 40 GHz have been conducted to determine whether simulated biological or chemical agents concealed in mail packages or under clothing could be detected using this extremely high-frequency imaging technique. The results of this imaging study will be presented in this paper.
Ng, David C; Tamura, Hideki; Tokuda, Takashi; Yamamoto, Akio; Matsuo, Masamichi; Nunoshita, Masahiro; Ishikawa, Yasuyuki; Shiosaka, Sadao; Ohta, Jun
2006-09-30
The aim of the present study is to demonstrate the application of complementary metal-oxide semiconductor (CMOS) imaging technology for studying the mouse brain. By using a dedicated CMOS image sensor, we have successfully imaged and measured brain serine protease activity in vivo, in real-time, and for an extended period of time. We have developed a biofluorescence imaging device by packaging the CMOS image sensor which enabled on-chip imaging configuration. In this configuration, no optics are required whereby an excitation filter is applied onto the sensor to replace the filter cube block found in conventional fluorescence microscopes. The fully packaged device measures 350 μm thick × 2.7 mm wide, consists of an array of 176 × 144 pixels, and is small enough for measurement inside a single hemisphere of the mouse brain, while still providing sufficient imaging resolution. In the experiment, intraperitoneally injected kainic acid induced upregulation of serine protease activity in the brain. These events were captured in real time by imaging and measuring the fluorescence from a fluorogenic substrate that detected this activity. The entire device, which weighs less than 1% of the body weight of the mouse, holds promise for studying freely moving animals.
Wide-Field Imaging Telescope-0 (WIT0) with automatic observing system
NASA Astrophysics Data System (ADS)
Ji, Tae-Geun; Byeon, Seoyeon; Lee, Hye-In; Park, Woojin; Lee, Sang-Yun; Hwang, Sungyong; Choi, Changsu; Gibson, Coyne Andrew; Kuehne, John W.; Prochaska, Travis; Marshall, Jennifer L.; Im, Myungshin; Pak, Soojong
2018-01-01
We introduce Wide-Field Imaging Telescope-0 (WIT0), with an automatic observing system. It was developed for monitoring the variability of many sources at a time, e.g. young stellar objects and active galactic nuclei. It can also find the locations of transient sources such as supernovae or gamma-ray bursts. In 2017 February, we installed the wide-field 10-inch telescope (Takahashi CCA-250) as a piggyback system on the 30-inch telescope at the McDonald Observatory in Texas, US. The 10-inch telescope has a 2.35 × 2.35 deg field-of-view with a 4k × 4k CCD Camera (FLI ML16803). To improve the observational efficiency of the system, we developed new automatic observing software, KAOS30 (KHU Automatic Observing Software for the McDonald 30-inch telescope), written in Visual C++ for the Windows operating system. The software consists of four control packages: the Telescope Control Package (TCP), the Data Acquisition Package (DAP), the Auto Focus Package (AFP), and the Script Mode Package (SMP). Since it also supports instruments that use the ASCOM driver, installing additional hardware is greatly simplified. We commissioned KAOS30 in 2017 August and are in the process of testing it. Based on the WIT0 experience, we will extend KAOS30 to control multiple telescopes in future projects.
Focal plane instrument for the Solar UV-Vis-IR Telescope aboard SOLAR-C
NASA Astrophysics Data System (ADS)
Katsukawa, Yukio; Suematsu, Yoshinori; Shimizu, Toshifumi; Ichimoto, Kiyoshi; Takeyama, Norihide
2011-10-01
We present the conceptual design of a focal plane instrument for the Solar UV-Vis-IR Telescope (SUVIT) aboard the next Japanese solar mission SOLAR-C. A primary purpose of the telescope is to achieve precise, high-resolution spectroscopic and polarimetric measurements of the solar chromosphere with a large aperture of 1.5 m, which is expected to bring significant progress in understanding basic MHD processes in the solar atmosphere. The focal plane instrument consists of two packages: a filtergraph package, which acquires not only monochromatic images but also Dopplergrams and magnetograms using a tunable narrow-band filter and interference filters, and a spectrograph package, which performs accurate spectro-polarimetric observations for measuring chromospheric magnetic fields and employs a Littrow-type spectrograph. The most challenging aspect of the instrument design is the wide wavelength coverage, from 280 nm to 1.1 μm, required to observe multiple chromospheric lines, which is to be realized with a lens unit including fluoride glasses. A high-speed camera for correlation tracking of granular motion is also implemented in one of the packages for an image stabilization system, which is essential to achieve high spatial resolution and high polarimetric accuracy.
The cigarette pack as image: new evidence from tobacco industry documents
Wakefield, M; Morley, C; Horan, J; Cummings, K
2002-01-01
Methods: A search of tobacco company document sites using a list of specified search terms was undertaken from November 2000 to July 2001. Results: Documents show that, especially in the context of tighter restrictions on conventional avenues for tobacco marketing, tobacco companies view cigarette packaging as an integral component of marketing strategy and a vehicle for (a) creating significant in-store presence at the point of purchase, and (b) communicating brand image. Market testing results indicate that such imagery is so strong as to influence smokers' taste ratings of the same cigarettes when packaged differently. Documents also reveal the careful balancing act that companies have employed in using pack design and colour to communicate the impression of lower tar or milder cigarettes, while preserving perceived taste and "satisfaction". Systematic and extensive research is carried out by tobacco companies to ensure that cigarette packaging appeals to selected target groups, including young adults and women. Conclusions: Cigarette pack design is an important communication device for cigarette brands and acts as an advertising medium. Many smokers are misled by pack design into thinking that cigarettes may be "safer". There is a need to consider regulation of cigarette packaging. PMID:11893817
Creation of a Geant4 Muon Tomography Package for Imaging of Nuclear Fuel in Dry Cask Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsoukalas, Lefteri H.
2016-03-01
This is the final report of the NEUP project “Creation of a Geant4 Muon Tomography Package for Imaging of Nuclear Fuel in Dry Cask Storage”, DE-NE0000695. The project started on December 1, 2013 and this report covers the period December 1, 2013 through November 30, 2015. The project was successfully completed and this report provides an overview of the main achievements, results and findings throughout the duration of the project. Additional details can be found in the main body of this report and on the individual Quarterly Reports and associated Deliverables of the project, uploaded in PICS-NE.
DESIGN ANALYSIS FOR THE DEFENSE HIGH-LEVEL WASTE DISPOSAL CONTAINER
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Radulesscu; J.S. Tang
The purpose of the ''Design Analysis for the Defense High-Level Waste Disposal Container'' analysis is to technically define the defense high-level waste (DHLW) disposal container/waste package using the Waste Package Department's (WPD) design methods, as documented in ''Waste Package Design Methodology Report'' (CRWMS M&O [Civilian Radioactive Waste Management System Management and Operating Contractor] 2000a). The DHLW disposal container is intended for disposal of commercial high-level waste (HLW) and DHLW (including immobilized plutonium waste forms), placed within disposable canisters. The U.S. Department of Energy (DOE)-managed spent nuclear fuel (SNF) in disposable canisters may also be placed in a DHLW disposal container along with HLW forms. The objective of this analysis is to demonstrate that the DHLW disposal container/waste package satisfies the project requirements, as embodied in the Defense High Level Waste Disposal Container System Description Document (SDD) (CRWMS M&O 1999a), and additional criteria, as identified in the Waste Package Design Sensitivity Report (CRWMS M&O 2000b, Table 4). The analysis briefly describes the analytical methods appropriate for the design of the DHLW disposal container/waste package, and summarizes the results of the calculations that illustrate the analytical methods. However, the analysis is limited to the calculations selected for the DHLW disposal container in support of the Site Recommendation (SR) (CRWMS M&O 2000b, Section 7). The scope of this analysis is restricted to the design of the codisposal waste package of the Savannah River Site (SRS) DHLW glass canisters and the Training, Research, Isotopes General Atomics (TRIGA) SNF loaded in a short 18-in.-outer diameter (OD) DOE standardized SNF canister. This waste package is representative of the waste packages that consist of the DHLW disposal container, the DHLW/HLW glass canisters, and the DOE-managed SNF in disposable canisters. The intended use of this analysis is to support Site Recommendation reports and to assist in the development of WPD drawings. Activities described in this analysis were conducted in accordance with the Development Plan ''Design Analysis for the Defense High-Level Waste Disposal Container'' (CRWMS M&O 2000c) with no deviations from the plan.
Unobtrusive integration of data management with fMRI analysis.
Poliakov, Andrew V; Hertzenberg, Xenia; Moore, Eider B; Corina, David P; Ojemann, George A; Brinkley, James F
2007-01-01
This note describes a software utility, called X-batch, which addresses two pressing issues typically faced by functional magnetic resonance imaging (fMRI) neuroimaging laboratories: (1) analysis automation and (2) data management. The first issue is addressed by providing a simple batch mode processing tool for the popular SPM software package (http://www.fil.ion.ucl.ac.uk/spm/; Wellcome Department of Imaging Neuroscience, London, UK). The second is addressed by transparently recording metadata describing all aspects of the batch job (e.g., subject demographics, analysis parameters, locations and names of created files, date and time of analysis, and so on). These metadata are recorded as instances of an extended version of the Protégé-based Experiment Lab Book ontology created by the Dartmouth fMRI Data Center. The resulting instantiated ontology provides a detailed record of all fMRI analyses performed, and as such can be part of larger systems for neuroimaging data management, sharing, and visualization. The X-batch system is in use in our own fMRI research, and is available for download at http://X-batch.sourceforge.net/.
Network Meta-Analysis Using R: A Review of Currently Available Automated Packages
Neupane, Binod; Richer, Danielle; Bonner, Ashley Joel; Kibret, Taddele; Beyene, Joseph
2014-01-01
Network meta-analysis (NMA) – a statistical technique that allows comparison of multiple treatments in the same meta-analysis simultaneously – has become increasingly popular in the medical literature in recent years. The statistical methodology underpinning this technique and software tools for implementing the methods are evolving. Both commercial and freely available statistical software packages have been developed to facilitate the statistical computations using NMA with varying degrees of functionality and ease of use. This paper aims to introduce the reader to three R packages, namely, gemtc, pcnetmeta, and netmeta, which are freely available software tools implemented in R. Each automates the process of performing NMA so that users can perform the analysis with minimal computational effort. We present, compare and contrast the availability and functionality of different important features of NMA in these three packages so that clinical investigators and researchers can determine which R packages to implement depending on their analysis needs. Four summary tables detailing (i) data input and network plotting, (ii) modeling options, (iii) assumption checking and diagnostic testing, and (iv) inference and reporting tools, are provided, along with an analysis of a previously published dataset to illustrate the outputs available from each package. We demonstrate that each of the three packages provides a useful set of tools, and combined, they provide users with nearly all functionality that might be desired when conducting an NMA. PMID:25541687
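The reviewed packages (gemtc, pcnetmeta, and netmeta) are R tools; purely as a language-agnostic illustration of the arithmetic that underlies any NMA implementation, the Python sketch below pools direct comparisons by fixed-effect inverse-variance weighting and forms a Bucher-style adjusted indirect comparison. The function names and numbers are hypothetical and are not taken from the reviewed packages or dataset.

```python
import numpy as np

def pool_fixed_effect(estimates, variances):
    """Inverse-variance fixed-effect pooling of study-level treatment effects."""
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return pooled, 1.0 / np.sum(w)   # pooled estimate and its variance

def indirect_comparison(d_ab, var_ab, d_ac, var_ac):
    """Adjusted indirect estimate of C vs B via the common comparator A."""
    return d_ac - d_ab, var_ac + var_ab

# Hypothetical log odds ratios from A-vs-B and A-vs-C trials
d_ab, v_ab = pool_fixed_effect([-0.31, -0.22, -0.40], [0.04, 0.06, 0.05])
d_ac, v_ac = pool_fixed_effect([-0.55, -0.48], [0.07, 0.09])
d_bc, v_bc = indirect_comparison(d_ab, v_ab, d_ac, v_ac)
print(f"indirect C vs B: {d_bc:.3f} (SE {v_bc**0.5:.3f})")
```

A full NMA generalizes this building block to an entire network of treatments, with consistency between direct and indirect evidence assessed explicitly.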
A multiparametric assay for quantitative nerve regeneration evaluation.
Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G
2005-08-01
We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheath thickness). The assay's performance and correlation of the automatically computed data with visually obtained data are determined on a set of 140 semithin sections from the distal part of a rat tibial nerve from four different experimental treatment groups (control, sham, sutured, cut) taken at seven different time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.
Environmental assessment of packaging: Sense and sensibility
NASA Astrophysics Data System (ADS)
Kooijman, Jan M.
1993-09-01
The functions of packaging are derived from product requirements, thus for insight into the environmental effects of packaging the actual combination of product and package has to be evaluated along the production and distribution system. This extension to all related environmental aspects adds realism to the environmental analysis and provides guidance for design while preventing a too detailed investigation of parts of the production system. This approach is contrary to current environmental studies where packaging is always treated as an independent object, neglecting the more important environmental effects of the product that are influenced by packaging. The general analysis and quantification stages for this approach are described, and the currently available methods for the assessment of environmental effects are reviewed. To limit the workload involved in an environmental assessment, a step-by-step analysis and the use of feedback is recommended. First the dominant environmental effects of a particular product and its production and distribution are estimated. Then, on the basis of these preliminary results, the appropriate system boundaries are chosen and the need for further or more detailed environmental analysis is determined. For typical food and drink applications, the effect of different system boundaries on the outcome of environmental assessments and the advantage of the step-by-step analysis of the food supply system is shown. It appears that, depending on the consumer group, different advice for reduction of environmental effects has to be given. Furthermore, because of interrelated environmental effects of the food supply system, the continuing quest for more detailed and accurate analysis of the package components is not necessary for improved management of the environmental effects of packaging.
Report for 2012 from the Bordeaux IVS Analysis Center
NASA Technical Reports Server (NTRS)
Charlot, Patrick; Bellanger, Antoine; Bouffet, Romuald; Bourda, Geraldine; Collioud, Arnaud; Baudry, Alain
2013-01-01
This report summarizes the activities of the Bordeaux IVS Analysis Center during the year 2012. The work focused on (i) regular analysis of the IVS-R1 and IVS-R4 sessions with the GINS software package; (ii) systematic VLBI imaging of the RDV sessions and calculation of the corresponding source structure index and compactness values; (iii) investigation of the correlation between astrometric position instabilities and source structure variations; and (iv) continuation of our VLBI observational program to identify optically-bright radio sources suitable for the link with the future Gaia frame. Also of importance is the 11th European VLBI Network Symposium, which we organized last October in Bordeaux and which drew much attention from the European and International VLBI communities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gates, A.A.; McCarthy, P.G.; Edl, J.W.
1975-05-01
Elemental tritium is shipped at low pressure in a stainless steel container (LP-50) surrounded by an aluminum vessel and Celotex insulation at least 4 in. thick in a steel drum. Each package contains a large quantity (greater than a Type A quantity) of nonfissile material, as defined in AECM 0529. This report provides the details of the safety analysis performed for this type of container.
Comparison of requirements and capabilities of major multipurpose software packages.
Igo, Robert P; Schnell, Audrey H
2012-01-01
The aim of this chapter is to introduce the reader to commonly used software packages and illustrate their input requirements, analysis options, strengths, and limitations. We focus on packages that perform more than one function and include a program for quality control, linkage, and association analyses. Additional inclusion criteria were (1) programs that are free to academic users and (2) programs that are currently supported, maintained, and developed. Using those criteria, we chose to review three programs: Statistical Analysis for Genetic Epidemiology (S.A.G.E.), PLINK, and Merlin. We will describe the required input format and analysis options. We will not go into detail about every possible program in the packages, but we will give an overview of the packages' requirements and capabilities.
NASA Astrophysics Data System (ADS)
Ramesham, Rajeshuni
2012-03-01
Ceramic column grid array (CCGA) packages have been increasing in use based on their advantages such as high interconnect density, very good thermal and electrical performance, compatibility with standard surface-mount packaging assembly processes, and so on. CCGA packages are used in space applications such as in logic and microprocessor functions, telecommunications, payload electronics, and flight avionics. As these packages tend to have less solder joint strain relief than leaded packages, or more strain relief than leadless chip carrier packages, the reliability of CCGA packages is very important for short-term and long-term deep space missions. We have employed high-density CCGA 1152 and 1272 daisy-chained electronic packages in this preliminary reliability study. Each package is divided into several daisy-chained sections. The physical dimensions of the CCGA1152 package are 35 mm x 35 mm, with a 34 x 34 array of columns at a 1 mm pitch. The CCGA1272 package measures 37.5 mm x 37.5 mm, with a 36 x 36 array at a 1 mm pitch. The columns are made of 80%Pb/20%Sn material. The CCGA interconnect packages were assembled onto polyimide printed wiring boards and inspected using non-destructive x-ray imaging techniques. The assembled CCGA boards were subjected to extreme temperature thermal atmospheric cycling to assess their reliability for future deep space missions. The resistance of the daisy-chained interconnect sections was monitored continuously during thermal cycling. This paper provides the experimental test results of advanced CCGA packages tested in extreme temperature thermal environments. Standard optical inspection and x-ray non-destructive inspection tools were used to assess the reliability of high-density CCGA packages for deep space extreme temperature missions.
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed where four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
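As a rough illustration of the two evaluation axes used above, the sketch below computes a prediction (classification) accuracy and a simple split-half reproducibility, taken here as the voxelwise correlation between two statistical parametric images. It is not part of the Java system described in the abstract, and the arrays are simulated.

```python
import numpy as np

def classification_accuracy(true_labels, predicted_labels):
    """Fraction of scans whose experimental condition is predicted correctly."""
    return np.mean(np.asarray(true_labels) == np.asarray(predicted_labels))

def spi_reproducibility(spi_split1, spi_split2):
    """Pearson correlation between statistical parametric images (SPIs)
    computed on two independent splits of the data."""
    return np.corrcoef(np.ravel(spi_split1), np.ravel(spi_split2))[0, 1]

# Toy example: simulated SPIs sharing a common signal plus independent noise
rng = np.random.default_rng(0)
signal = rng.normal(size=10000)
spi1 = signal + rng.normal(scale=0.5, size=10000)
spi2 = signal + rng.normal(scale=0.5, size=10000)
print(classification_accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
print(spi_reproducibility(spi1, spi2))                       # roughly 0.8
```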
Paulmurugan, Ramasamy; Bhethanabotla, Rohith; Mishra, Kaushik; Devulapally, Rammohan; Foygel, Kira; Sekar, Thillai V; Ananta, Jeyarama S; Massoud, Tarik F; Joy, Abraham
2015-01-01
Triple negative breast cancer (TNBC) is a recalcitrant malignancy with no available targeted therapy. Off-target effects and poor bioavailability of the FDA-approved anti-obesity drug orlistat hinder its clinical translation as a repurposed new drug against TNBC. Here we demonstrate a newly engineered drug formulation for packaging orlistat tailored to TNBC treatment. We synthesized TNBC-specific folate receptor-targeted micellar nanoparticles (NPs) carrying orlistat, which improved the solubility (70-80 μg/ml) of this water-insoluble drug. The targeted NPs also improved the delivery and bioavailability of orlistat to MDA-MB-231 cells in culture and to tumor xenografts in a nude mouse model. We prepared HEA-EHA copolymer micellar NPs by copolymerization of 2-hydroxyethylacrylate (HEA) and 2-ethylhexylacrylate (EHA), and functionalized them with folic acid and an imaging dye. Fluorescence-activated cell sorting (FACS) analysis of TNBC cells indicated a dose-dependent increase in apoptotic populations in cells treated with free orlistat, orlistat NPs, and folate receptor-targeted Fol-HEA-EHA-orlistat NPs, in which Fol-HEA-EHA-orlistat NPs showed significantly higher cytotoxicity than free orlistat. In vitro analysis demonstrated significant apoptosis at nanomolar concentrations in cells activated through caspase 3 and PARP inhibition. In vivo analysis demonstrated significant antitumor effects in living mice after targeted treatment of tumors, as confirmed by fluorescence imaging. Moreover, mice treated with folate receptor-targeted Fol-DyLight747-orlistat NPs exhibited a significantly greater reduction in tumor volume compared to the control group. Taken together, these results indicate that orlistat packaged in HEA-b-EHA micellar NPs is a highly promising new drug formulation for TNBC therapy. PMID:26553061
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
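MERCURY⊕ derives its support values from training-data frequencies and is written in C; the sketch below illustrates only the underlying Dempster's rule of combination for two evidence sources over a small frame of discernment. The class labels and mass values are hypothetical, and the code is not the MERCURY⊕ implementation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozensets of
    class labels to mass) using Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("sources are completely conflicting")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two evidence sources supporting land-cover classes for one pixel (hypothetical)
classes = frozenset({"tundra", "forest", "water"})
m_optical = {frozenset({"tundra"}): 0.6, frozenset({"forest"}): 0.1, classes: 0.3}
m_radar = {frozenset({"tundra", "forest"}): 0.5, frozenset({"water"}): 0.2, classes: 0.3}
print(dempster_combine(m_optical, m_radar))
```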
Image retrieval and processing system version 2.0 development work
NASA Technical Reports Server (NTRS)
Slavney, Susan H.; Guinness, Edward A.
1991-01-01
The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that are compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.
DARPA/ISTO Rapid VLSI Implementation
1991-12-01
Motorola MC100E111 very fast 1:9 clock buffers were procured to drive high-speed waveforms onto the substrate clock distribution. The hot image is normalized to a room-temperature image, which removes all optical anomalies and leaves a high-resolution thermal image.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-24
... such a demonstration by the importer will information, images, or samples be shared with the right... (see Sec. 133.21(b)(1) of this rule). This disclosure of information, which includes images... release to right holders information appearing on goods (and/or their retail packaging), and on images and...
Structured Forms Reference Set of Binary Images (SFRS)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access) The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
Recent developments in intelligent packaging for enhancing food quality and safety.
Sohail, Muhammad; Sun, Da-Wen; Zhu, Zhiwei
2018-03-07
The role of packaging cannot be denied in the life cycle of any food product. Intelligent packaging is an emerging technology in the food packaging sector. Although it has yet to fully emerge in the market, its importance for maintaining food quality and safety has been demonstrated. The present review describes several aspects of intelligent packaging. It first highlights different tools used in intelligent packaging and elucidates the role of these packaging devices in maintaining the quality of different food items in terms of controlling microbial growth and gas concentration, and in providing convenience to users in the form of time-temperature indication. This review also discusses other intelligent packaging solutions in supply chain management of food products to control theft and counterfeiting and to strengthen the image of food companies in terms of branding and marketing. Overall, intelligent packaging can help ensure food quality and safety in the food industry; however, there are still some concerns over this emerging technology, including high cost and legal aspects, and future work should address these problems to further promote its applications in the food industry. Moreover, work should also be carried out to combine several individual intelligent packaging devices into a single one, so that most of the benefits of this emerging technology can be realized.
Joosen, Ronny V L; Kodde, Jan; Willems, Leo A J; Ligterink, Wilco; van der Plas, Linus H W; Hilhorst, Henk W M
2010-04-01
Over the past few decades seed physiology research has contributed to many important scientific discoveries and has provided valuable tools for the production of high quality seeds. An important instrument for this type of research is the accurate quantification of germination; however, gathering cumulative germination data is a very laborious task that is often prohibitive to the execution of large experiments. In this paper we present the germinator package: a simple, highly cost-efficient and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The germinator package contains three modules: (i) design of experimental setup with various options to replicate and randomize samples; (ii) automatic scoring of germination based on the color contrast between the protruding radicle and seed coat on a single image; and (iii) curve fitting of cumulative germination data and the extraction, recap and visualization of the various germination parameters. The curve-fitting module enables analysis of general cumulative germination data and can be used for all plant species. We show that the automatic scoring system works for Arabidopsis thaliana and Brassica spp. seeds, but is likely to be applicable to other species as well. In this paper we show the accuracy, reproducibility and flexibility of the germinator package. We have successfully applied it to evaluate natural variation for salt tolerance in a large population of recombinant inbred lines and were able to identify several quantitative trait loci for salt tolerance. Germinator is a low-cost package that allows the monitoring of several thousands of germination tests, several times a day by a single person.
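As an illustration of the kind of curve fitting performed by the third module, the sketch below fits a Hill-type curve to hypothetical cumulative germination counts and reads off summary parameters such as maximum germination and t50. The functional form, parameter names, and data are assumptions for illustration, not the germinator code itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(t, gmax, t50, b):
    """Cumulative germination (%) as a Hill function of time:
    gmax = maximum germination, t50 = time to reach half of gmax, b = steepness."""
    return gmax * t**b / (t50**b + t**b)

# Hypothetical cumulative germination percentages scored at fixed imaging times (h)
t = np.array([12, 24, 36, 48, 60, 72, 96, 120], dtype=float)
g = np.array([0, 5, 28, 55, 72, 81, 86, 88], dtype=float)

(gmax, t50, b), _ = curve_fit(hill, t, g, p0=[90.0, 48.0, 4.0])
print(f"Gmax={gmax:.1f}%  t50={t50:.1f} h  slope={b:.2f}")
# Derived traits such as uniformity (e.g. the t84 - t16 interval) follow from the fitted curve.
```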
DOE Office of Scientific and Technical Information (OSTI.GOV)
Astaf'ev, S. B., E-mail: bard@ns.crys.ras.ru; Shchedrin, B. M.; Yanusova, L. G.
2012-01-15
The main principles of developing the Basic Analysis of Reflectometry Data (BARD) software package, which is aimed at obtaining a unified (standardized) tool for analyzing the structure of thin multilayer films and nanostructures of different nature based on reflectometry data, are considered. This software package contains both traditionally used procedures for processing reflectometry data and the authors' original developments on the basis of new methods for carrying out and analyzing reflectometry experiments. The structure of the package, its functional possibilities, examples of application, and prospects of development are reviewed.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
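The NiftyPET API itself is not reproduced here; to show how the forward- and back-projection stage (4) feeds an iterative reconstruction, the sketch below implements a plain MLEM update with an explicit toy system matrix standing in for the platform's matched projectors. All names and numbers are illustrative assumptions.

```python
import numpy as np

def mlem(A, y, additive, n_iter=50):
    """Maximum-likelihood EM reconstruction for emission tomography.
    A        : toy system matrix (n_bins x n_voxels) acting as the forward projector
    y        : measured prompt counts per sinogram bin
    additive : expected random plus scatter counts per bin
    """
    x = np.ones(A.shape[1])                    # uniform initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image, A^T 1
    for _ in range(n_iter):
        expected = A @ x + additive            # forward model
        x *= (A.T @ (y / expected)) / sens     # multiplicative EM update
    return x

# Tiny toy problem (2 voxels, 3 sinogram bins) just to exercise the update
A = np.array([[1.0, 0.2], [0.5, 0.5], [0.1, 1.0]])
truth = np.array([4.0, 2.0])
y = A @ truth + 0.1
print(mlem(A, y, additive=np.full(3, 0.1), n_iter=200))  # approaches [4, 2]
```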
Long, Zaiyang; Tradup, Donald J; Stekel, Scott F; Gorny, Krzysztof R; Hangiandreou, Nicholas J
2018-03-01
We evaluated a commercially available software package that uses B-mode images to semi-automatically measure quantitative metrics of ultrasound image quality, such as contrast response, depth of penetration (DOP), and spatial resolution (lateral, axial, and elevational). Since measurement of elevational resolution is not a part of the software package, we achieved it by acquiring phantom images with transducers tilted at 45 degrees relative to the phantom. Each measurement was assessed in terms of measurement stability, sensitivity, repeatability, and semi-automated measurement success rate. All assessments were performed on a GE Logiq E9 ultrasound system with linear (9L or 11L), curved (C1-5), and sector (S1-5) transducers, using a CIRS model 040GSE phantom. In stability tests, the measurements of contrast, DOP, and spatial resolution remained within a ±10% variation threshold in 90%, 100%, and 69% of cases, respectively. In sensitivity tests, contrast, DOP, and spatial resolution measurements followed the expected behavior in 100%, 100%, and 72% of cases, respectively. In repeatability testing, intra- and inter-individual coefficients of variation were equal to or less than 3.2%, 1.3%, and 4.4% for contrast, DOP, and spatial resolution (lateral and axial), respectively. The coefficients of variation corresponding to the elevational resolution test were all within 9.5%. Overall, in our assessment, the evaluated package performed well for objective and quantitative assessment of the above-mentioned image quality metrics under well-controlled acquisition conditions. We are finding it to be useful for various clinical ultrasound applications including performance comparison between scanners from different vendors. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.
2015-01-01
We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569
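The following Python sketch (MATtrack itself is MATLAB code) illustrates generic versions of three of the steps described above: percentile-based contrast stretching, temporal smoothing of an image stack, and mean-intensity tracking inside a region of interest. The array sizes and parameters are hypothetical.

```python
import numpy as np

def contrast_stretch(frame, low_pct=1, high_pct=99):
    """Linearly rescale intensities between two percentiles to the range [0, 1]."""
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    return np.clip((frame - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def temporal_smooth(stack, window=3):
    """Moving-average smoothing along the time axis of a (t, y, x) stack."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, stack)

def roi_trace(stack, mask):
    """Mean fluorescence inside a boolean ROI mask, one value per frame."""
    return stack[:, mask].mean(axis=1)

# Hypothetical 20-frame time-lapse with a brightening spot in one corner
rng = np.random.default_rng(1)
stack = rng.normal(100, 5, size=(20, 64, 64))
stack[:, 30:34, 30:34] += np.linspace(0, 50, 20)[:, None, None]
stack = temporal_smooth(np.stack([contrast_stretch(f) for f in stack]))
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 30:34] = True
print(roi_trace(stack, mask))   # intensity trace rises over time
```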
gr-MRI: A software package for magnetic resonance imaging using software defined radios.
Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A
2016-09-01
The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs. Copyright © 2016. Published by Elsevier Inc.
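As an illustration of the frequency-swept waveform mentioned above, the sketch below generates a complex baseband linear chirp of the kind an SDR transmit chain could play out. The 2 ms duration and 2 MS/s sample rate are assumptions chosen for illustration and are not gr-MRI parameters.

```python
import numpy as np

def linear_chirp(duration_s, bandwidth_hz, sample_rate_hz):
    """Complex baseband linear frequency sweep centred on 0 Hz,
    covering -bandwidth/2 to +bandwidth/2 over the pulse duration."""
    n = int(duration_s * sample_rate_hz)
    t = np.arange(n) / sample_rate_hz
    sweep_rate = bandwidth_hz / duration_s                       # Hz per second
    phase = 2 * np.pi * (-bandwidth_hz / 2 * t + 0.5 * sweep_rate * t**2)
    return np.exp(1j * phase)

# 500 kHz sweep over 2 ms at 2 MS/s (illustrative parameters only)
pulse = linear_chirp(2e-3, 500e3, 2e6)
inst_freq = np.diff(np.unwrap(np.angle(pulse))) * 2e6 / (2 * np.pi)
print(inst_freq[0], inst_freq[-1])   # roughly -250 kHz at the start, +250 kHz at the end
```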
PharmacoGx: an R package for analysis of large pharmacogenomic datasets.
Smirnov, Petr; Safikhani, Zhaleh; El-Hachem, Nehme; Wang, Dong; She, Adrian; Olsen, Catharina; Freeman, Mark; Selby, Heather; Gendoo, Deena M A; Grossmann, Patrick; Beck, Andrew H; Aerts, Hugo J W L; Lupien, Mathieu; Goldenberg, Anna; Haibe-Kains, Benjamin
2016-04-15
Pharmacogenomics holds great promise for the development of biomarkers of drug response and the design of new therapeutic options, which are key challenges in precision medicine. However, such data are scattered and lack standards for efficient access and analysis, consequently preventing the realization of the full potential of pharmacogenomics. To address these issues, we implemented PharmacoGx, an easy-to-use, open source package for integrative analysis of multiple pharmacogenomic datasets. We demonstrate the utility of our package in comparing large drug sensitivity datasets, such as the Genomics of Drug Sensitivity in Cancer and the Cancer Cell Line Encyclopedia. Moreover, we show how to use our package to easily perform Connectivity Map analysis. With increasing availability of drug-related data, our package will open new avenues of research for meta-analysis of pharmacogenomic data. PharmacoGx is implemented in R and can be easily installed on any system. The package is available from CRAN and its source code is available from GitHub. bhaibeka@uhnresearch.ca or benjamin.haibe.kains@utoronto.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Development of a CCD based solar speckle imaging system
NASA Astrophysics Data System (ADS)
Nisenson, Peter; Stachnik, Robert V.; Noyes, Robert W.
1986-02-01
A program to develop software and hardware for the purpose of obtaining high angular resolution images of the solar surface is described. The program included the procurement of a Charge Coupled Device (CCD) imaging system; extensive laboratory and remote-site testing of the camera system; the development of a software package for speckle image reconstruction, which was eventually installed and tested at the Sacramento Peak Observatory; and experiments with the CCD system (coupled to an image intensifier) for low-light-level, narrow-spectral-band solar imaging.
Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom
2018-01-09
We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
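SSTMap's own API is not shown here; as a minimal illustration of the grid-based bookkeeping that precedes the IST energetic and entropic calculations, the sketch below bins water-oxygen coordinates from a synthetic trajectory onto a regular grid and converts counts to a number density. Function and variable names, box size, and frame counts are hypothetical.

```python
import numpy as np

def water_occupancy_grid(oxygen_xyz_per_frame, origin, spacing, shape):
    """Histogram water-oxygen positions from an MD trajectory onto a regular grid
    and convert counts to number density (waters per cubic angstrom per frame)."""
    counts = np.zeros(shape)
    edges = [origin[d] + spacing * np.arange(shape[d] + 1) for d in range(3)]
    for xyz in oxygen_xyz_per_frame:
        h, _ = np.histogramdd(xyz, bins=edges)
        counts += h
    return counts / (len(oxygen_xyz_per_frame) * spacing**3)

# Hypothetical trajectory: 100 frames of 500 water oxygens in a 20 A cubic box
rng = np.random.default_rng(2)
frames = [rng.uniform(0, 20, size=(500, 3)) for _ in range(100)]
density = water_occupancy_grid(frames, origin=(0, 0, 0), spacing=0.5, shape=(40, 40, 40))
print(density.mean(), 500 / 20**3)   # grid-average density versus expected bulk value
```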
Ahn, Woo-Young; Haines, Nathaniel; Zhang, Lei
2017-01-01
Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into the basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for both RLDM and computational psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based analysis of data from functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, researchers often find the approach too technical and have difficulty adopting it for their research. Thus, a critical need remains to develop a user-friendly tool for the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, the package is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparisons, each with a single line of coding. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational-modeling approaches to investigate the underlying processes of and interactions between multiple decision-making (e.g., goal-directed, habitual, and Pavlovian) systems. In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within different populations. PMID:29601060
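hBayesDM performs hierarchical Bayesian estimation in R and Stan; the sketch below illustrates only the simpler idea of extracting trial-by-trial latent variables, here prediction errors from a basic Rescorla-Wagner learner with a fixed learning rate, of the sort regressed against fMRI or EEG data. The choice and reward data and the parameter value are hypothetical, and this is not the hBayesDM code.

```python
import numpy as np

def rescorla_wagner(choices, rewards, alpha, n_options=2):
    """Delta-rule value updating; returns the trial-by-trial prediction errors
    that model-based fMRI/EEG analyses use as parametric regressors."""
    q = np.zeros(n_options)
    prediction_errors = []
    for choice, reward in zip(choices, rewards):
        pe = reward - q[choice]          # prediction error on this trial
        q[choice] += alpha * pe          # update the chosen option's value
        prediction_errors.append(pe)
    return np.array(prediction_errors)

# Hypothetical two-armed bandit data for one subject
choices = [0, 0, 1, 0, 1, 1, 0]
rewards = [1, 0, 1, 1, 0, 1, 1]
print(rescorla_wagner(choices, rewards, alpha=0.3))
```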
Trimarchi, Matteo; Lund, Valerie J; Nicolai, Piero; Pini, Massimiliano; Senna, Massimo; Howard, David J
2004-04-01
The Neoplasms of the Sinonasal Tract software package (NSNT v 1.0) implements a complete visual database for patients with sinonasal neoplasia, facilitating standardization of data and statistical analysis. The software, which is compatible with the Macintosh and Windows platforms, provides a multiuser application with a dedicated server (on Windows NT or 2000, or Macintosh OS 9 or X, and a network of clients) together with web access, if required. The system hardware consists of an Apple Power Macintosh G4 500 MHz computer with PCI bus, 256 MB of RAM, and a 60 GB hard disk, or any IBM-compatible computer with a Pentium II processor. Image acquisition may be performed with different frame-grabber cards for analog or digital video input of different standards (PAL, SECAM, or NTSC) and levels of quality (VHS, S-VHS, Betacam, Mini DV, DV). The visual database is based on 4th Dimension by 4D Inc, and video compression is performed in real-time MPEG format. Six sections have been developed: demographics, symptoms, extent of disease, radiology, treatment, and follow-up. Acquisition of data includes computed tomography and magnetic resonance imaging, histology, and endoscopy images, allowing sequential comparison. Statistical analysis integral to the program provides Kaplan-Meier survival curves. The development of a dedicated, user-friendly database for sinonasal neoplasia facilitates a multicenter network and has obvious clinical and research benefits.
The Java Image Science Toolkit (JIST) for rapid prototyping and publishing of neuroimaging software.
Lucas, Blake C; Bogovic, John A; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L; Pham, Dzung L; Landman, Bennett A
2010-03-01
Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUI's, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC).
Cenozoic Antarctic DiatomWare/BugCam: An aid for research and teaching
Wise, S.W.; Olney, M.; Covington, J.M.; Egerton, V.M.; Jiang, S.; Ramdeen, D.K.; ,; Schrader, H.; Sims, P.A.; Wood, A.S.; Davis, A.; Davenport, D.R.; Doepler, N.; Falcon, W.; Lopez, C.; Pressley, T.; Swedberg, O.L.; Harwood, D.M.
2007-01-01
Cenozoic Antarctic DiatomWare/BugCam© is an interactive, icon-driven digital-image database/software package that displays over 500 illustrated Cenozoic Antarctic diatom taxa along with original descriptions (including over 100 generic and 20 family-group descriptions). This digital catalog is designed primarily for use by micropaleontologists working in the field (at sea or on the Antarctic continent) where hard-copy literature resources are limited. This new package will also be useful for classroom/lab teaching as well as for any paleontologists making or refining taxonomic identifications at the microscope. The database (Cenozoic Antarctic DiatomWare) is displayed via a custom software program (BugCam) written in Visual Basic for use on PCs running Windows 95 or later operating systems. BugCam is a flexible image display program that utilizes an intuitive thumbnail "tree" structure for navigation through the database. The data are stored in Microsoft EXCEL spreadsheets; hence, no separate relational database program is necessary to run the package.
NASA Astrophysics Data System (ADS)
Duffy, Alan; Yates, Brian; Takacs, Peter
2012-09-01
The Optical Metrology Facility at the Canadian Light Source (CLS) has recently purchased MountainsMap surface analysis software from Digital Surf, and we report here our experiences with this package and its usefulness as a tool for examining metrology data of synchrotron x-ray mirrors. The package has a number of operators that are useful for determining surface roughness and slope error, including compliance with ISO standards (viz. ISO 4287 and ISO 25178). The software is extensible with MATLAB scripts, either by loading an m-file or by a user-written script. This makes it possible to apply a custom operator to measurement data sets. Using this feature we have applied the simple six-line MATLAB code for the direct least-squares fitting of ellipses developed by Fitzgibbon et al. to investigate the residual slope error of elliptical mirrors upon the removal of the best-fit ellipse. The software includes support for many instruments (e.g. Zygo, MicroMap, etc...) and can import ASCII data (e.g. LTP data). The stitching module allows the user to assemble overlapping images, and we report on our experiences with this feature applied to MicroMap surface roughness data. The power spectral density function was determined for the stitched and unstitched data and compared.
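A Python transcription of the direct least-squares ellipse-fitting idea referenced above (Fitzgibbon, Pilu and Fisher) is sketched below for synthetic sample points; it is not the MountainsMap operator or the original MATLAB script. In the mirror-metrology application, the residual slope error would then follow by subtracting the fitted ellipse from the measured profile and differentiating.

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Direct least-squares ellipse fit: returns the conic coefficients
    (a, b, c, d, e, f) of a x^2 + b x y + c y^2 + d x + e y + f = 0,
    constrained so the conic is an ellipse (4ac - b^2 > 0)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                                    # scatter matrix
    C = np.zeros((6, 6))                           # ellipse constraint matrix
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S, C))
    k = np.argmax(eigvals.real)                    # the single positive eigenvalue
    return eigvecs[:, k].real

# Hypothetical noisy samples of an ellipse with semi-axes 3 and 1, centre (5, 2)
t = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(3)
x = 5 + 3 * np.cos(t) + rng.normal(scale=0.01, size=t.size)
y = 2 + 1 * np.sin(t) + rng.normal(scale=0.01, size=t.size)
print(fit_ellipse_direct(x, y))
```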
Jacques, Eveline; Buytaert, Jan; Wells, Darren M; Lewandowski, Michal; Bennett, Malcolm J; Dirckx, Joris; Verbelen, Jean-Pierre; Vissenberg, Kris
2013-06-01
Image acquisition is an important step in the study of cytoskeleton organization. As visual interpretations and manual measurements of digital images are prone to errors and require a great amount of time, a freely available software package named MicroFilament Analyzer (MFA) was developed. The goal was to provide a tool that facilitates high-throughput analysis to determine the orientation of filamentous structures on digital images in a more standardized, objective and repeatable way. Here, the rationale and applicability of the program are demonstrated by analyzing the microtubule patterns in epidermal cells of control and gravi-stimulated Arabidopsis thaliana roots. Differential expansion of cells on either side of the root results in downward bending of the root tip. As cell expansion depends on the properties of the cell wall, this may imply a differential orientation of cellulose microfibrils. As cellulose deposition is orchestrated by cortical microtubules, the microtubule patterns were analyzed. The MFA program detects the filamentous structures on the image and identifies the main orientation(s) within individual cells. This revealed four distinguishable microtubule patterns in root epidermal cells. The analysis indicated that gravitropic stimulation and developmental age are both significant factors that determine microtubule orientation. Moreover, the data show that an altered microtubule pattern does not precede differential expansion. Other possible applications are also illustrated, including field emission scanning electron micrographs of cellulose microfibrils in plant cell walls and images of fluorescent actin. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.
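For readers who want a feel for what such an orientation analysis involves, the sketch below estimates the dominant filament orientation of an image from its structure tensor using plain NumPy. This is a generic illustration under our own assumptions, not MFA's algorithm or its output format; the function name and the synthetic stripe image are invented for the example.

```python
import numpy as np

def dominant_orientation(image):
    """Dominant filament orientation in degrees, estimated from the structure tensor.

    Filaments run perpendicular to the direction of strongest intensity variation,
    hence the 90-degree offset at the end.
    """
    gy, gx = np.gradient(image.astype(float))          # d/dy (rows), d/dx (columns)
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)     # orientation of maximal gradient energy
    return (np.degrees(theta) + 90.0) % 180.0

# Synthetic "filaments": stripes whose intensity varies along 30 degrees,
# so the filaments themselves are oriented at about 120 degrees.
y, x = np.mgrid[0:128, 0:128]
angle = np.deg2rad(30.0)
stripes = np.sin(2.0 * np.pi * (x * np.cos(angle) + y * np.sin(angle)) / 8.0)
print(dominant_orientation(stripes))                   # ~120
```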
Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis
2011-01-01
Background: A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. Results: The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operating system. Although the processing speed was slower than that under other conditions, it remained comparable. Conclusions: With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites. PMID:21266047
Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis.
Nemoto, Kiyotaka; Dan, Ippeita; Rorden, Christopher; Ohnishi, Takashi; Tsuzuki, Daisuke; Okamoto, Masako; Yamashita, Fumio; Asada, Takashi
2011-01-25
A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operating system. Although the processing speed was slower than that under other conditions, it remained comparable. With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites.
A software package for evaluating the performance of a star sensor operation
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Sreejith, A. G.; Nirmal, K.; Ambily, S.; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant
2017-02-01
We have developed a low-cost off-the-shelf component star sensor (StarSense) for use in minisatellites and CubeSats to determine the attitude of a satellite in orbit. StarSense is an imaging camera with a limiting magnitude of 6.5, which extracts information from star patterns it records in the images. The star sensor implements a centroiding algorithm to find centroids of the stars in the image, a Geometric Voting algorithm for star pattern identification, and a QUEST algorithm for attitude quaternion calculation. Here, we describe the software package used to evaluate the performance of these algorithms operating together as a single star sensor system. We simulate the ideal case where sky background and instrument errors are omitted, and a more realistic case where noise and camera parameters are added to the simulated images. We evaluate such performance parameters of the algorithms as attitude accuracy, calculation time, required memory, star catalog size, sky coverage, etc., and estimate the errors introduced by each algorithm. This software package is written for use in MATLAB. The testing is parametrized for different hardware parameters, such as the focal length of the imaging setup, the field of view (FOV) of the camera, angle measurement accuracy, distortion effects, etc., and can therefore be applied to evaluate the performance of such algorithms in any star sensor. For its hardware implementation on our StarSense, we are currently porting the code into functions written in C. This is done with a view to easy implementation on any star sensor electronics hardware.
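The centroiding step named above is easy to prototype outside the authors' MATLAB/C code. The sketch below is a minimal, generic illustration (not the flight implementation): it labels connected bright regions in a star image with SciPy and returns intensity-weighted centroids; the threshold and the synthetic star are arbitrary choices for the example.

```python
import numpy as np
from scipy import ndimage

def star_centroids(image, threshold):
    """Sub-pixel (row, col) centroids of connected bright regions above a threshold."""
    mask = image > threshold
    labels, n = ndimage.label(mask)                          # group bright pixels into star candidates
    coms = ndimage.center_of_mass(image, labels, range(1, n + 1))
    return np.array(coms)

# One synthetic "star" with its peak at pixel (31, 41)
img = np.zeros((64, 64))
img[30:33, 40:43] = [[1, 2, 1], [2, 5, 2], [1, 2, 1]]
print(star_centroids(img, threshold=0.5))                    # -> approximately [[31., 41.]]
```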
Young Adult Smokers' Neural Response to Graphic Cigarette Warning Labels.
Green, Adam E; Mays, Darren; Falk, Emily B; Vallone, Donna; Gallagher, Natalie; Richardson, Amanda; Tercyak, Kenneth P; Abrams, David B; Niaura, Raymond S
2016-06-01
The study examined young adult smokers' neural response to graphic warning labels (GWLs) on cigarette packs using functional magnetic resonance imaging (fMRI). Nineteen young adult smokers (M age 22.9, 52.6% male, 68.4% non-white, M 4.3 cigarettes/day) completed pre-scan, self-report measures of demographics, cigarette smoking behavior, and nicotine dependence, and an fMRI scanning session. During the scanning session participants viewed cigarette pack images (total 64 stimuli, viewed 4 seconds each) that varied based on the warning label (graphic or visually occluded control) and pack branding (branded or plain packaging) in an event-related experimental design. Participants reported motivation to quit (MTQ) in response to each image using a push-button control. Whole-brain blood oxygenation level-dependent (BOLD) functional images were acquired during the task. GWLs produced significantly greater self-reported MTQ than control warnings (p < .001). Imaging data indicate stronger neural activation in response to GWLs than the control warnings at a cluster-corrected threshold of p < .001 in medial prefrontal cortex, amygdala, medial temporal lobe, and occipital cortex. There were no significant differences in response to warnings on branded versus plain cigarette packages. In this sample of young adult smokers, GWLs promoted neural activation in brain regions involved in cognitive and affective decision-making and memory formation, and the effects of GWLs did not differ on branded or plain cigarette packaging. These findings complement other recent neuroimaging GWL studies conducted with older adult smokers and with adolescents by demonstrating similar patterns of neural activation in response to GWLs among young adult smokers.
Yu, Hwan Hee; Song, Myung Wook; Kim, Tae-Kyung; Choi, Yun-Sang; Cho, Gyu Yong; Lee, Na-Kyoung; Paik, Hyun-Dong
2018-01-01
The objective of this study was to compare the physicochemical, microbiological, and sensory characteristics of Hanwoo eye of round under various packaging methods [wrapped packaging (WP), modified atmosphere packaging (MAP), vacuum packaging (VP) with three different vacuum films, and vacuum skin packaging (VSP)] at a small scale. Packaged Hanwoo beef samples were stored in refrigerated conditions (4±1°C) for 28 days. Packaged beef was sampled on days 0, 7, 14, 21, and 28. Physicochemical [pH, surface color, thiobarbituric acid reactive substances (TBARS), and volatile basic nitrogen (VBN) values], microbiological, and sensory analyses of packaged beef samples were performed. VP and VSP samples showed low TBARS and VBN values, and pH and surface color did not change substantially during the 28-day period. For VSP, total viable bacteria, psychrotrophic bacteria, lactic acid bacteria, and coliform counts were lower than those for other packaging systems. Salmonella spp. and Escherichia coli O157:H7 were not detected in any packaged beef samples. A sensory analysis showed that the scores for appearance, flavor, color, and overall acceptability did not change significantly until day 7. Overall, VSP was effective for Hanwoo beef packaging, with significantly higher a* values, physicochemical stability, and microbial safety (p<0.05). PMID:29805283
A Description and Analysis of the German Packaging Take-Back System
ERIC Educational Resources Information Center
Nakajima, Nina; Vanderburg, Willem H.
2006-01-01
The German packaging ordinance is an example of legislated extended producer responsibility (also known as product take-back). Consumers can leave packaging with retailers, and packagers are required to pay for their recycling and disposal. It can be considered to be successful in reducing waste, spurring the redesign of packaging to be more…
DESIGN ANALYSIS FOR THE NAVAL SNF WASTE PACKAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Mitchell
2000-05-31
The purpose of this analysis is to demonstrate the design of the naval spent nuclear fuel (SNF) waste package (WP) using the Waste Package Department's (WPD) design methodologies and processes described in the ''Waste Package Design Methodology Report'' (CRWMS M&O [Civilian Radioactive Waste Management System Management and Operating Contractor] 2000b). The calculations that support the design of the naval SNF WP will be discussed; however, only a sub-set of such analyses will be presented and shall be limited to those identified in the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The objective of this analysis is to describe the naval SNF WP design method and to show that the design of the naval SNF WP complies with the ''Naval Spent Nuclear Fuel Disposal Container System Description Document'' (CRWMS M&O 1999a) and Interface Control Document (ICD) criteria for Site Recommendation. Additional criteria for the design of the naval SNF WP have been outlined in Section 6.2 of the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The scope of this analysis is restricted to the design of the naval long WP containing one naval long SNF canister. This WP is representative of the WPs that will contain both naval short SNF and naval long SNF canisters. The following items are included in the scope of this analysis: (1) Providing a general description of the applicable design criteria; (2) Describing the design methodology to be used; (3) Presenting the design of the naval SNF waste package; and (4) Showing compliance with all applicable design criteria. The intended use of this analysis is to support Site Recommendation reports and assist in the development of WPD drawings. Activities described in this analysis were conducted in accordance with the technical product development plan (TPDP) ''Design Analysis for the Naval SNF Waste Package'' (CRWMS M&O 2000a).
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The powerful package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All the work a user needs to do is to input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor, followed by yielding the predicted results for the submitted query samples. All the aforementioned tedious jobs can be automatically done by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
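The abstract does not spell out Pse-Analysis's own function calls, so rather than guess at its API, the sketch below walks through the same five automated steps with scikit-learn on a toy dataset; the feature function, sequences, and parameter grid are hypothetical stand-ins for what the package does internally.

```python
from collections import Counter
from itertools import product

import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def kmer_features(seq, k=2, alphabet="ACGT"):
    """(1) Feature extraction: normalised k-mer counts for one sequence."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return np.array([counts[m] / total for m in kmers])

# Toy benchmark dataset: two positive and two negative sequences
seqs = ["ACGTACGTAC", "ACGGACGTAA", "TTTTGGGGCC", "TTGTGGCGCC"]
y = np.array([1, 1, 0, 0])
X = np.vstack([kmer_features(s) for s in seqs])

# (2) Optimal parameter selection and (3) model training
grid = GridSearchCV(SVC(kernel="rbf"), {"C": [0.1, 1, 10]}, cv=2)
grid.fit(X, y)

# (4) Cross validation and (5) evaluation of prediction quality
scores = cross_val_score(grid.best_estimator_, X, y, cv=2)
print(grid.best_params_, scores.mean())
```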
Do isolated packaging variables influence consumers' attention and preferences?
García-Madariaga, Jesús; Blasco López, Maria-Francisca; Burgos, Ingrit Moya; Virto, Nuria Recuero
2018-04-25
Developments in neuroscience have provided the opportunity to capture unconscious consumer reactions and to obtain direct measures of cognitive constructs such as attention. Given the ever-increasing concern over packaging's contribution to creating a positive first impression, the current research seeks to examine consumers' attention and declarative preferences regarding the three main packaging attributes as isolated variables: images, texts and colours. The experiment exposed participants (N = 40) to 63 stimuli, which were based on modifications of the three main packaging attributes of three products of three different food categories. This study used electroencephalography (EEG) and eye-tracking (ET) to measure attention, and a declarative test was employed to examine preference. First, the results presented herein show that the presence of visual elements, either images or texts on packages, increased the participants' level of attention. Second, the results reveal that colour modifications do not have a significant effect on participants' neurophysiological attention levels. Third, the results demonstrated that the neurophysiological effects among the participants do not necessarily coincide with their subjective evaluations of preference. Hence, this study increases awareness of the relevance of combining traditional market research tools that rely on explicit consumer responses with neuroscientific techniques. These findings indicate, first of all, that more research is needed to ascertain the extent to which consumers' neurophysiological outcomes correspond to their declarative preferences and, second, that neurophysiological methods should be given more attention in research. Copyright © 2018 Elsevier Inc. All rights reserved.
Comparison of Perfusion CT Software to Predict the Final Infarct Volume After Thrombectomy.
Austein, Friederike; Riedel, Christian; Kerby, Tina; Meyne, Johannes; Binder, Andreas; Lindner, Thomas; Huhndorf, Monika; Wodarg, Fritz; Jansen, Olav
2016-09-01
Computed tomographic perfusion represents an interesting physiological imaging modality for selecting patients for reperfusion therapy in acute ischemic stroke. The purpose of our study was to determine the accuracy of different commercial perfusion CT software packages (Philips (A), Siemens (B), and RAPID (C)) in predicting the final infarct volume (FIV) after mechanical thrombectomy. Single-institutional computed tomographic perfusion data from 147 mechanically recanalized acute ischemic stroke patients were postprocessed. Ischemic core and FIV were compared with respect to thrombolysis in cerebral infarction (TICI) score and time interval to reperfusion. FIV was measured at follow-up imaging between days 1 and 8 after stroke. In 118 successfully recanalized patients (TICI 2b/3), a moderately to strongly positive correlation was observed between ischemic core and FIV. The highest accuracy and best correlation were seen in early and fully recanalized patients (Pearson r for A=0.42, B=0.64, and C=0.83; P<0.001). Bland-Altman plots and boxplots demonstrate smaller ranges in package C than in A and B. Significant differences were found between the packages regarding over- and underestimation of the ischemic core. Package A, compared with B and C, estimated more than twice as many patients with a malignant stroke profile (P<0.001). Package C best predicted hypoperfusion volume in nonsuccessfully recanalized patients. Our study demonstrates the best accuracy and agreement between the results of a fully automated software package (RAPID) and FIV, especially in early and fully recanalized patients. Furthermore, this software package overestimated the FIV to a significantly lower degree and estimated a malignant mismatch profile less often than the other software packages. © 2016 American Heart Association, Inc.
What Do Cigarette Pack Colors Communicate to Smokers in the U.S.?
Bansal-Travers, Maansi; O’Connor, Richard; Fix, Brian V.; Cummings, K. Michael
2011-01-01
Background: New legislation in the U.S. prohibits tobacco companies from labelling cigarette packs with terms such as ‘light,’ ‘mild,’ or ‘low’ after June 2010. However, experience from countries that have removed these descriptors suggests different terms, colors, or numbers communicating the same messages may replace them. Purpose: The main purpose of this study was to examine how cigarette pack colors are perceived by smokers to correspond to different descriptive terms. Methods: Newspaper advertisements and craigslist.org postings directed interested current smokers to a survey website. Eligible participants were shown an array of six cigarette packages (altered to remove all descriptive terms) and asked to link package images with their corresponding descriptive terms. Participants were then asked to identify which pack in the array they would choose if they were concerned with health, tar, nicotine, image, and taste. Results: A total of 193 participants completed the survey from February to March 2008 (data were analyzed from May 2008 through November 2010). Participants were more accurate in matching descriptors to pack images for Marlboro brand cigarettes than for unfamiliar Peter Jackson brand (sold in Australia). Smokers overwhelmingly chose the ‘whitest’ pack if they were concerned about health, tar, and nicotine. Conclusions: Smokers in the U.S. associate brand descriptors with colors. Further, white packaging appears to most influence perceptions of safety. Removal of descriptor terms but not the associated colors will be insufficient in eliminating misperceptions about the risks from smoking communicated to smokers through packaging. PMID:21565662
What do cigarette pack colors communicate to smokers in the U.S.?
Bansal-Travers, Maansi; O'Connor, Richard; Fix, Brian V; Cummings, K Michael
2011-06-01
New legislation in the U.S. prohibits tobacco companies from labeling cigarette packs with terms such as light, mild, or low after June 2010. However, experience from countries that have removed these descriptors suggests that different terms, colors, or numbers communicating the same messages may replace them. The main purpose of this study was to examine how cigarette pack colors are perceived by smokers to correspond to different descriptive terms. Newspaper advertisements and CraigsList.org postings directed interested current smokers to a survey website. Eligible participants were shown an array of six cigarette packages (altered to remove all descriptive terms) and asked to link package images with their corresponding descriptive terms. Participants were then asked to identify which pack in the array they would choose if they were concerned with health, tar, nicotine, image, and taste. A total of 193 participants completed the survey from February to March 2008 (data were analyzed from May 2008 through November 2010). Participants were more accurate in matching descriptors to pack images for Marlboro brand cigarettes than for unfamiliar Peter Jackson brand (sold in Australia). Smokers overwhelmingly chose the "whitest" pack if they were concerned about health, tar, and nicotine. Smokers in the U.S. associate brand descriptors with colors. Further, white packaging appears to most influence perceptions of safety. Removal of descriptor terms but not the associated colors will be insufficient in eliminating misperceptions about the risks from smoking communicated to smokers through packaging. Copyright © 2011 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Rane, Swati; Plassard, Andrew; Landman, Bennett A.; Claassen, Daniel O.; Donahue, Manus J.
2017-01-01
This work explores the feasibility of combining anatomical MRI data across two public repositories namely, the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Progressive Parkinson’s Markers Initiative (PPMI). We compared cortical thickness and subcortical volumes in cognitively normal older adults between datasets with distinct imaging parameters to assess if they would provide equivalent information. Three distinct datasets were identified. Major differences in data were scanner manufacturer and the use of magnetization inversion to enhance tissue contrast. Equivalent datasets, i.e., those providing similar volumetric measurements in cognitively normal controls, were identified in ADNI and PPMI. These were datasets obtained on the Siemens scanner with TI = 900 ms. Our secondary goal was to assess the agreement between subcortical volumes that are obtained with different software packages. Three subcortical measurement applications (FSL, FreeSurfer, and a recent multi-atlas approach) were compared. Our results show significant agreement in the measurements of caudate, putamen, pallidum, and hippocampus across the packages and poor agreement between measurements of accumbens and amygdala. This is likely due to their smaller size and lack of gray matter-white matter tissue contrast for accurate segmentation. This work provides a segue to combine imaging data from ADNI and PPMI to increase statistical power as well as to interrogate common mechanisms in disparate pathologies such as Alzheimer’s and Parkinson’s diseases. It lays the foundation for comparison of anatomical data acquired with disparate imaging parameters and analyzed with disparate software tools. Furthermore, our work partly explains the variability in the results of studies using different software packages. PMID:29756095
Rane, Swati; Plassard, Andrew; Landman, Bennett A; Claassen, Daniel O; Donahue, Manus J
2017-01-01
This work explores the feasibility of combining anatomical MRI data across two public repositories namely, the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Progressive Parkinson's Markers Initiative (PPMI). We compared cortical thickness and subcortical volumes in cognitively normal older adults between datasets with distinct imaging parameters to assess if they would provide equivalent information. Three distinct datasets were identified. Major differences in data were scanner manufacturer and the use of magnetization inversion to enhance tissue contrast. Equivalent datasets, i.e., those providing similar volumetric measurements in cognitively normal controls, were identified in ADNI and PPMI. These were datasets obtained on the Siemens scanner with TI = 900 ms. Our secondary goal was to assess the agreement between subcortical volumes that are obtained with different software packages. Three subcortical measurement applications (FSL, FreeSurfer, and a recent multi-atlas approach) were compared. Our results show significant agreement in the measurements of caudate, putamen, pallidum, and hippocampus across the packages and poor agreement between measurements of accumbens and amygdala. This is likely due to their smaller size and lack of gray matter-white matter tissue contrast for accurate segmentation. This work provides a segue to combine imaging data from ADNI and PPMI to increase statistical power as well as to interrogate common mechanisms in disparate pathologies such as Alzheimer's and Parkinson's diseases. It lays the foundation for comparison of anatomical data acquired with disparate imaging parameters and analyzed with disparate software tools. Furthermore, our work partly explains the variability in the results of studies using different software packages.
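As a schematic of the kind of cross-package agreement check described here (not the authors' pipeline), the sketch below computes a Pearson correlation and a Bland-Altman style bias between hypothetical subcortical volumes reported by two software packages; all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
true_vol = rng.normal(3500.0, 300.0, size=40)               # e.g., hippocampal volumes in mm^3
pkg_a = true_vol + rng.normal(0.0, 60.0, size=40)           # hypothetical output of package A
pkg_b = 0.97 * true_vol + rng.normal(0.0, 60.0, size=40)    # hypothetical output of package B, slight bias

r = np.corrcoef(pkg_a, pkg_b)[0, 1]                         # agreement as correlation
diff = pkg_a - pkg_b
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)            # Bland-Altman bias and limits of agreement
print(round(r, 3), round(bias, 1), round(loa, 1))
```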
Safety analysis report for packaging (onsite) multicanister overpack cask
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, W.S.
1997-07-14
This safety analysis report for packaging (SARP) documents the safety of shipments of irradiated fuel elements in the Multicanister Overpack (MCO) and MCO Cask for a highway route controlled quantity, Type B fissile package. This SARP evaluates the package during transfers of (1) water-filled MCOs from the K Basins to the Cold Vacuum Drying Facility (CVDF) and (2) sealed and cold vacuum dried MCOs from the CVDF in the 100 K Area to the Canister Storage Building in the 200 East Area.
3-D readout-electronics packaging for high-bandwidth massively paralleled imager
Kwiatkowski, Kris; Lyke, James
2007-12-18
Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45° angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of said stacked CMOS chip layers and a distribution grid, the connections distributing power and signals to components associated with each stacked CMOS chip layer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yuxuan; Bilheux, Jean -Christophe
ImagingReso is an open-source Python library that simulates the neutron resonance signal for neutron imaging measurements. By defining the sample information such as density, thickness in the neutron path, and isotopic ratios of the elemental composition of the material, this package plots the expected resonance peaks for a selected neutron energy range. Various sample types such as layers of single elements (Ag, Co, etc. in solid form), chemical compounds (UO3, Gd2O3, etc.), or even multiple layers of both types can be plotted with this package. As a result, major plotting features include display of the transmission/attenuation in wavelength, energy, and time scale, and show/hide of elemental and isotopic contributions in the total resonance signal.
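For orientation, the sketch below reproduces the physics behind such a plot in plain NumPy, independently of ImagingReso's actual API: a single illustrative Breit-Wigner resonance folded into Beer-Lambert transmission for one layer. The resonance parameters and material constants are invented for the example and are not evaluated nuclear data.

```python
import numpy as np

N_A = 6.022e23                       # atoms per mol

def transmission(energy_eV, E0, gamma, sigma0_barn, density, molar_mass, thickness_cm):
    """Neutron transmission of a single-element layer around one resonance line."""
    sigma_cm2 = sigma0_barn * 1e-24 * (gamma / 2) ** 2 / ((energy_eV - E0) ** 2 + (gamma / 2) ** 2)
    n = density * N_A / molar_mass   # atom number density, atoms per cm^3
    return np.exp(-n * sigma_cm2 * thickness_cm)

E = np.linspace(1.0, 10.0, 500)      # eV
T = transmission(E, E0=5.2, gamma=0.2, sigma0_barn=1.0e3,
                 density=7.3, molar_mass=107.0, thickness_cm=0.05)
print(E[T.argmin()], T.min())        # transmission dip sits at the resonance energy
```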
Bailey, Rachel L
2017-10-01
From an ecological perception perspective (Gibson, 1977), the availability of perceptual information alters what behaviors are more and less likely at different times. This study examines how perceptual information delivered in food advertisements and packaging alters the time course of information processing and decision making. Participants categorized images of food that varied in information delivered in terms of color, glossiness, and texture (e.g., food cues) before and after being exposed to a set of advertisements that also varied in this way. In general, items with more direct cues enhanced appetitive motivational processes, especially if they were also advertised with direct food cues. Individuals also chose to eat products that were packaged with more available direct food cues compared to opaque packaging.
MWASTools: an R/bioconductor package for metabolome-wide association studies.
Rodriguez-Martinez, Andrea; Posma, Joram M; Ayala, Rafael; Neves, Ana L; Anwar, Maryam; Petretto, Enrico; Emanueli, Costanza; Gauguier, Dominique; Nicholson, Jeremy K; Dumas, Marc-Emmanuel
2018-03-01
MWASTools is an R package designed to provide an integrated pipeline to analyse metabonomic data in large-scale epidemiological studies. Key functionalities of our package include: quality control analysis; metabolome-wide association analysis using various models (partial correlations, generalized linear models); visualization of statistical outcomes; metabolite assignment using statistical total correlation spectroscopy (STOCSY); and biological interpretation of metabolome-wide association studies results. The MWASTools R package is implemented in R (version >= 3.4) and is available from Bioconductor: https://bioconductor.org/packages/MWASTools/. Contact: m.dumas@imperial.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Imagers for digital still photography
NASA Astrophysics Data System (ADS)
Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge
2006-04-01
This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.
NASA Astrophysics Data System (ADS)
Collier, J.; McDermott, C.; Lonergan, L.; McDermott, K.; Bellingham, P.
2017-12-01
Our understanding of continental breakup at volcanic margins has lagged behind that of non-volcanic margins in recent years. This is largely due to seismic imaging problems caused by the presence of thick packages of Seaward-Dipping Reflectors (SDRs) in the continent-ocean transition zone. These packages consist of interbedded tholeiitic lava flows, volcanic tuffs and terrestrial sediment that results in scattering, peg-leg multiples and defocusing of seismic energy. Here we analyse three ultra-long-offset (10.2 km), wide-bandwidth (5-100 Hz) seismic reflection profiles acquired by ION-GXT offshore South America during 2009-12 to gain new insights into the velocity structure of the SDRs and hence pattern of magmatism during continental breakup. We observe two seismic velocity patterns within the SDRs. The most landward packages show high velocity anomaly "bulls-eyes" of up to 1 km s-1. These highs occur where the stacked section shows them to thicken at the down-dip end of individual packages that are bounded by faults. All lines show 5-6 velocity highs spaced approximately 10 km apart. We interpret the velocity bulls-eyes as depleted mafic or ultramafic bodies that fed the sub-aerial tholeiitic lava flows during continental stretching. Similar relationships have been observed in outcrop onshore but have not been previously demonstrated in seismic data. The bulls-eye packages pass laterally into SDR packages that show no velocity highs. These packages are not associated with faulting and become more extensive going north towards the impact point of the Tristan da Cunha hotspot. This second type of SDR coincides with linear magnetic anomalies. We interpret these SDRs as the products of sub-aerial oceanic spreading similar to those seen on Iceland and described in the classic "Hinz model" and marine geophysical literature. Our work demonstrates that these SDRs are preceded by ones generated during an earlier phase of mechanical thinning of the continental crust. The pattern of volcanism during this first phase does not appear related to distance to the ancestral hotspot whereas the second phase does.
Development of a Mars Surface Imager
NASA Technical Reports Server (NTRS)
Squyres, Steve W.
1994-01-01
The Mars Surface Imager (MSI) is a multispectral, stereoscopic, panoramic imager that allows imaging of the full scene around a Mars lander from the lander body to the zenith. It has two functional components: panoramic imaging and sky imaging. In the most recent version of the MSI, called PIDDP-cam, a very long multi-line color CCD, an innovative high-performance drive system, and a state-of-the-art wavelet image compression code have been integrated into a single package. The requirements for the flight version of the MSI and the current design are presented.
Structural constraints in the packaging of bluetongue virus genomic segments
Burkhardt, Christiane; Sung, Po-Yu; Celma, Cristina C.
2014-01-01
The mechanism used by bluetongue virus (BTV) to ensure the sorting and packaging of its 10 genomic segments is still poorly understood. In this study, we investigated the packaging constraints for two BTV genomic segments from two different serotypes. Segment 4 (S4) of BTV serotype 9 was mutated sequentially and packaging of mutant ssRNAs was investigated by two newly developed RNA packaging assay systems, one in vivo and the other in vitro. Modelling of the mutated ssRNA followed by biochemical data analysis suggested that a conformational motif formed by interaction of the 5′ and 3′ ends of the molecule was necessary and sufficient for packaging. A similar structural signal was also identified in S8 of BTV serotype 1. Furthermore, the same conformational analysis of secondary structures for positive-sense ssRNAs was used to generate a chimeric segment that maintained the putative packaging motif but contained unrelated internal sequences. This chimeric segment was packaged successfully, confirming that the motif identified directs the correct packaging of the segment. PMID:24980574
Implementing a Java Based GUI for RICH Detector Analysis
NASA Astrophysics Data System (ADS)
Lendacky, Andrew; Voloshin, Andrew; Benmokhtar, Fatiha
2016-09-01
The CLAS12 detector at Thomas Jefferson National Accelerator Facility (TJNAF) is undergoing an upgrade. One of the improvements is the addition of a Ring Imaging Cherenkov (RICH) detector to improve particle identification in the 3-8 GeV/c momentum range. Approximately 400 multi-anode photomultiplier tubes (MAPMTs) are going to be used to detect Cherenkov radiation in the single photoelectron spectra (SPS). The SPS of each pixel of all MAPMTs has been fitted to a mathematical model of roughly 45 parameters at 4 HVs and 3 ODs. Of those parameters, 9 can be used to evaluate a PMT's performance and placement in the detector. To help analyze data when the RICH is operational, a GUI application was written in Java using Swing and detector packages from TJNAF. To store and retrieve the data, a MySQL database program was written in Java using the JDBC package. Using the database, the GUI pulls the values and produces histograms and graphs for a selected PMT at a specific HV and OD. The GUI will allow researchers to easily view a PMT's performance and efficiency to help with data analysis and ring reconstruction when the RICH is finished.
Novel Techniques for Millimeter-Wave Packages
NASA Technical Reports Server (NTRS)
Herman, Martin I.; Lee, Karen A.; Kolawa, Elzbieta A.; Lowry, Lynn E.; Tulintseff, Ann N.
1995-01-01
A new millimeter-wave package architecture with supporting electrical, mechanical and materials science experiments and analysis is presented. This package is well suited for discrete devices, monolithic microwave integrated circuits (MMICs) and multichip module (MCM) applications. It has low-loss wide-band RF transitions, which are necessary to overcome manufacturing tolerances, leading to lower per-unit cost. Potential applications of this new packaging architecture which go beyond the standard requirements of device protection include integration of antennas, compatibility with photonic networks and direct transitions to waveguide systems. Techniques for electromagnetic analysis, thermal control and hermetic sealing were explored. Three-dimensional electromagnetic analysis was performed using a finite difference time-domain (FDTD) algorithm and experimentally verified for millimeter-wave package input and output transitions. New multi-material system concepts (AlN, Cu, and diamond thin films) which allow excellent surface finishes to be achieved with enhanced thermal management have been investigated. A new approach utilizing block copolymer coatings was employed to hermetically seal packages, which met MIL-STD-883.
Extreme Ultraviolet Imaging Telescope (EIT)
NASA Technical Reports Server (NTRS)
Lemen, J. R.; Freeland, S. L.
1997-01-01
Efforts concentrated on development and implementation of the SolarSoft (SSW) data analysis system. From an EIT analysis perspective, this system was designed to facilitate efficient reuse and conversion of software developed for Yohkoh/SXT and to take advantage of a large existing body of software developed by the SDAC, Yohkoh, and SOHO instrument teams. Another strong motivation for this system was to provide an EIT analysis environment which permits coordinated analysis of EIT data in conjunction with data from important supporting instruments, including Yohkoh/SXT and the other SOHO coronal instruments: CDS, SUMER, and LASCO. In addition, the SSW system will support coordinated EIT/TRACE analysis (by design) when TRACE data are available; the TRACE launch is currently planned for March 1998. Working with Jeff Newmark, the CHIANTI software package (K. P. Dere et al.) and UV/EUV database were fully integrated into the SSW system to facilitate EIT temperature and emission analysis.
Labiner-Wolfe, Judith; Jordan Lin, Chung-Tung; Verrill, Linda
2010-01-01
Objective: Evaluate the effect of low-carbohydrate claims on consumer perceptions about food products' healthfulness and helpfulness for weight management. Design: Experiment in which participants were randomly assigned 1 of 12 front-of-package claim conditions on bread or a frozen dinner. Seven of the 12 conditions also included Nutrition Facts (NF) information. Setting: Internet. Participants: 4,320 members of a national on-line consumer panel. Intervention: Exposure to images of a food package. Main outcome measures: Ratings on Likert scales about perceived healthfulness, helpfulness for weight management, and caloric content. Analysis: Mean ratings by outcome measure, condition, and product were calculated. Ratings were also used as the dependent measure in analysis of variance models. Results: Participants who saw front-of-package-only conditions rated products bearing low-carbohydrate claims as more helpful for weight management and lower in calories than the same products without a claim. Those who saw the bread with low-carbohydrate claims also rated it as more healthful than those who saw no claim. When the NF label was available and products had the same nutrition profile, participants rated products with low-carbohydrate claims the same as those with no claim. Conclusions: Consumers who do not use the NF panel may interpret low-carbohydrate claims to have meaning beyond the scope of the claim itself. Published by Elsevier Inc.
Bumstead, Matt; Liang, Kunyu; Hanta, Gregory; Hui, Lok Shu; Turak, Ayse
2018-01-24
Order classification is particularly important in photonics, optoelectronics, nanotechnology, biology, and biomedicine, as self-assembled and living systems tend to be ordered well but not perfectly. Engineering sets of experimental protocols that can accurately reproduce specific desired patterns can be a challenge when (dis)ordered outcomes look visually similar. Robust comparisons between similar samples, especially with limited data sets, need a finely tuned ensemble of accurate analysis tools. Here we introduce our numerical Mathematica package disLocate, a suite of tools to rapidly quantify the spatial structure of a two-dimensional dispersion of objects. The full range of tools available in disLocate give different insights into the quality and type of order present in a given dispersion, accessing the translational, orientational and entropic order. The utility of this package allows for researchers to extract the variation and confidence range within finite sets of data (single images) using different structure metrics to quantify local variation in disorder. Containing all metrics within one package allows for researchers to easily and rapidly extract many different parameters simultaneously, allowing robust conclusions to be drawn on the order of a given system. Quantifying the experimental trends which produce desired morphologies enables engineering of novel methods to direct self-assembly.
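disLocate itself is a Mathematica package, so the following is only a language-neutral illustration of one orientational-order metric of the kind it reports: the bond-orientational (hexatic) order parameter |psi_6|, computed here with SciPy over the six nearest neighbours of each particle. The function and variable names are ours, not disLocate's.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_psi6(points, n_neighbors=6):
    """Mean |psi_6| over all particles: near 1 for hexagonal order, low for disorder."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=n_neighbors + 1)          # each particle plus its neighbours
    vecs = points[idx[:, 1:]] - points[:, None, :]          # bond vectors to the 6 nearest neighbours
    angles = np.arctan2(vecs[..., 1], vecs[..., 0])
    psi = np.exp(6j * angles).mean(axis=1)
    return np.abs(psi).mean()

# A triangular lattice versus a random dispersion of the same size
lattice = np.array([(x + 0.5 * (y % 2), y * np.sqrt(3) / 2)
                    for x in range(10) for y in range(10)], dtype=float)
random_pts = np.random.default_rng(0).uniform(0.0, 10.0, size=(100, 2))
print(mean_psi6(lattice), mean_psi6(random_pts))            # markedly higher for the lattice
```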
The cigarette pack as image: new evidence from tobacco industry documents.
Wakefield, M; Morley, C; Horan, J K; Cummings, K M
2002-03-01
To gain an understanding of the role of pack design in tobacco marketing, a search of tobacco company document sites using a list of specified search terms was undertaken during November 2000 to July 2001. Documents show that, especially in the context of tighter restrictions on conventional avenues for tobacco marketing, tobacco companies view cigarette packaging as an integral component of marketing strategy and a vehicle for (a) creating significant in-store presence at the point of purchase, and (b) communicating brand image. Market testing results indicate that such imagery is so strong as to influence smokers' taste ratings of the same cigarettes when packaged differently. Documents also reveal the careful balancing act that companies have employed in using pack design and colour to communicate the impression of lower tar or milder cigarettes, while preserving perceived taste and "satisfaction". Systematic and extensive research is carried out by tobacco companies to ensure that cigarette packaging appeals to selected target groups, including young adults and women. Cigarette pack design is an important communication device for cigarette brands and acts as an advertising medium. Many smokers are misled by pack design into thinking that cigarettes may be "safer". There is a need to consider regulation of cigarette packaging.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
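A minimal sketch of the central idea, assuming NumPy and statsmodels: fit a robust M-estimator (Huber weights) instead of ordinary least squares at each voxel, so that a single badly mis-registered subject does not dominate the voxelwise slope estimates. The data, shapes, and variable names are synthetic illustrations, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_subj, n_vox = 30, 100
structural = rng.normal(size=(n_subj, n_vox))               # regressor image values (e.g., GM density)
functional = 0.5 * structural + rng.normal(size=(n_subj, n_vox))
functional[0] += 10.0                                       # one outlier subject at every voxel

betas = np.empty(n_vox)
for v in range(n_vox):
    X = sm.add_constant(structural[:, v])
    fit = sm.RLM(functional[:, v], X, M=sm.robust.norms.HuberT()).fit()
    betas[v] = fit.params[1]                                # robust voxelwise slope

print(betas.mean())                                         # stays close to the true value of 0.5
```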
Uncooled LWIR imaging: applications and market analysis
NASA Astrophysics Data System (ADS)
Takasawa, Satomi
2015-05-01
The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both reduction in pixel pitch and vacuum packaging have drastically evolved in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From the macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market where uncooled LWIR imaging sensors with sensitivity as close to that of cooled ones as possible are required, while the other is a low-end market which is driven by miniaturization and reduction in price. Especially in the latter case, approaches towards the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors to night vision for automobiles and smart phones. The appearance of such commodity products will surely change existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies, dealing in components and materials such as lens materials and getter materials, to enter the consumer market.
Gruber, Bernd; Unmack, Peter J; Berry, Oliver F; Georges, Arthur
2018-05-01
Although vast technological advances have been made and genetic software packages are growing in number, it is not a trivial task to analyse SNP data. We announce a new R package, dartr, enabling the analysis of single nucleotide polymorphism data for population genomic and phylogenomic applications. dartr provides user-friendly functions for data quality control and marker selection, and permits rigorous evaluations of conformation to Hardy-Weinberg equilibrium, gametic-phase disequilibrium and neutrality. The package reports standard descriptive statistics, permits exploration of patterns in the data through principal components analysis and conducts standard F-statistics, as well as basic phylogenetic analyses, population assignment, isolation by distance and exports data to a variety of commonly used downstream applications (e.g., newhybrids, faststructure and phylogeny applications) outside of the R environment. The package serves two main purposes: first, a user-friendly approach to lower the hurdle to analyse such data; therefore, the package comes with a detailed tutorial targeted to the R beginner to allow data analysis without requiring deep knowledge of R. Second, we use a single, well-established format, genlight from the adegenet package, as input for all our functions to avoid data reformatting. By strictly using the genlight format, we hope to facilitate this format as the de facto standard of future software developments and hence reduce the format jungle of genetic data sets. The dartr package is available via the R CRAN network and GitHub. © 2017 John Wiley & Sons Ltd.
JUDE: An Ultraviolet Imaging Telescope pipeline
NASA Astrophysics Data System (ADS)
Murthy, J.; Rahna, P. T.; Sutaria, F.; Safonova, M.; Gudennavar, S. B.; Bubbly, S. G.
2017-07-01
The Ultraviolet Imaging Telescope (UVIT) was launched as part of the multi-wavelength Indian AstroSat mission on 28 September, 2015 into a low Earth orbit. A 6-month performance verification (PV) phase ended in March 2016, and the instrument is now in the general observing phase. UVIT operates in three channels: visible, near-ultraviolet (NUV) and far-ultraviolet (FUV), each with a choice of broad and narrow band filters, and has NUV and FUV gratings for low-resolution spectroscopy. We have written a software package (JUDE) to convert the Level 1 data from UVIT into scientifically useful photon lists and images. The routines are written in the GNU Data Language (GDL) and are compatible with the IDL software package. We use these programs in our own scientific work, and will continue to update the programs as we gain better understanding of the UVIT instrument and its performance. We have released JUDE under an Apache License.
Toward the greening of nuclear energy: A content analysis of nuclear energy frames from 1991 to 2008
NASA Astrophysics Data System (ADS)
Miller, Sonya R.
Framing theory has emerged as one of the predominant theories employed in mass communications research in the 21st century. Frames are identified as interpretive packages for content where some issue attributes are highlighted over other attributes. While framing effects studies appear plentiful, longitudinal studies assessing trends in dominant framing packages and story elements for an issue appear to be less understood. Through content analysis, this study examines dominant frame packages, story elements, headline tone, story tone, stereotypes, and source attribution for nuclear energy from 1991-2008 in the New York Times, USA Today, the Wall Street Journal, and the Washington Post. Unlike many content analysis studies, this study compares intercoder reliability among three indices: percentage agreement, proportional reduction of loss, and Scott's Pi. The newspapers represented in this study possess a commonality in the types of dominant frame packages employed. Significant dominant frame packages among the four newspapers include human/health, proliferation, procedural, and marketplace. While the procedural frame package was more likely to appear prior to the 1997 Kyoto Protocol, the proliferation frame package was more likely to appear after the Kyoto Protocol. Over time, the sustainable frame package demonstrated increased significance. This study is part of the growing literature regarding the function of frames over time.
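As a concrete reference for two of the indices compared, the short sketch below computes simple percentage agreement and Scott's Pi for two coders' frame labels; the coded labels are invented for illustration.

```python
from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def scotts_pi(a, b):
    """Scott's Pi: agreement corrected for chance using pooled category proportions."""
    po = percent_agreement(a, b)
    pooled = Counter(a) + Counter(b)
    n = len(a) + len(b)
    pe = sum((count / n) ** 2 for count in pooled.values())
    return (po - pe) / (1 - pe)

coder1 = ["proliferation", "procedural", "marketplace", "procedural", "human/health"]
coder2 = ["proliferation", "procedural", "procedural", "procedural", "human/health"]
print(percent_agreement(coder1, coder2), round(scotts_pi(coder1, coder2), 3))  # 0.8 and ~0.697
```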
AstroBlend: An astrophysical visualization package for Blender
NASA Astrophysics Data System (ADS)
Naiman, J. P.
2016-04-01
The rapid growth in scale and complexity of both computational and observational astrophysics over the past decade necessitates efficient and intuitive methods for examining and visualizing large datasets. Here, I present AstroBlend, an open-source Python library for use within the three dimensional modeling software, Blender. While Blender has been a popular open-source software among animators and visual effects artists, in recent years it has also become a tool for visualizing astrophysical datasets. AstroBlend combines the three dimensional capabilities of Blender with the analysis tools of the widely used astrophysical toolset, yt, to afford both computational and observational astrophysicists the ability to simultaneously analyze their data and create informative and appealing visualizations. The introduction of this package includes a description of features, work flow, and various example visualizations. A website - www.astroblend.com - has been developed which includes tutorials, and a gallery of example images and movies, along with links to downloadable data, three dimensional artistic models, and various other resources.
Stanley, J; Townsend, R
1986-01-01
Intact recombinant DNAs containing single copies of either component of the cassava latent virus genome can elicit infection when mechanically inoculated to host plants in the presence of the appropriate second component. Characterisation of infectious mutant progeny viruses, by analysis of virus-specific supercoiled DNA intermediates, indicates that most if not all of the cloning vector has been deleted, achieved at least in some cases by intermolecular recombination in vivo between DNAs 1 and 2. Significant rearrangements within the intergenic region of DNA 2, predominantly external to the common region, can be tolerated without loss of infectivity suggesting a somewhat passive role in virus multiplication for the sequences in question. Although packaging constraints might impose limits on the amount of DNA within geminate particles, isolation of an infectious coat protein mutant defective in virion production suggests that packaging is not essential for systemic spread of the viral DNA. PMID:2875435
Computer assisted learning (CAL) of oral manifestations of HIV disease.
Porter, S R; Telford, A; Chandler, K; Furber, S; Williams, J; Price, S; Scully, C; Triantos, D; Bain, L
1996-09-07
General dental practitioners (GDPs) in the UK may wish for additional education on relevant aspects of human immunodeficiency virus (HIV) disease. The aim of the present study was to develop and assess a computer-assisted learning package on the oral manifestations of HIV disease of relevance to GDPs. A package was developed using a commercially available software development tool and assessed by a group of 75 GDPs interested in education and computers. Fifty-four (72%) of the GDPs completed a self-administered questionnaire on their opinions of the package. The majority reported the package to be easy to load and run, that it provided clear instructions and displays, and that it was a more effective educational tool than videotapes, audiotapes, professional journals and textbooks, and of similar benefit to post-graduate courses. The GDPs often commented favourably on the effectiveness of the clinical images and the use of questions and answers, although some had criticisms of these and other aspects of the package. As a consequence of this investigation the package has been modified and distributed to GDPs in England and Wales.
New Techniques for High-contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Astrophysics Data System (ADS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Goto, M.; Grady, C. A.; Guyon, O.; Hashimoto, J.; Hayano, Y.; Hayashi, M.; Hayashi, S.; Henning, T.; Hodapp, K. W.; Ishii, M.; Iye, M.; Janson, M.; Kandori, R.; Knapp, G. R.; Kudo, T.; Kusakabe, N.; Kuzuhara, M.; Kwon, J.; Matsuo, T.; Miyama, S.; Morino, J.-I.; Moro-Martín, A.; Nishimura, T.; Pyo, T.-S.; Serabyn, E.; Suto, H.; Suzuki, R.; Takami, M.; Takato, N.; Terada, H.; Thalmann, C.; Tomono, D.; Watanabe, M.; Wisniewski, J. P.; Yamada, T.; Takami, H.; Usuda, T.; Tamura, M.
2013-02-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in Python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a Berkeley Software Distribution (BSD) license. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
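The trimmed-mean frame combination highlighted above is easy to demonstrate in isolation. The sketch below is a generic NumPy/SciPy illustration of that statistic, not ACORNS-ADI's own routine: clipping the extremes of each pixel stack before averaging suppresses a simulated cosmic-ray hit that would bias a plain mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
stack = rng.normal(0.0, 1.0, size=(50, 64, 64))      # 50 registered frames of pure noise
stack[7, 20, 20] += 500.0                             # a cosmic-ray hit in a single frame

plain_mean = stack.mean(axis=0)
trimmed = stats.trim_mean(stack, proportiontocut=0.1, axis=0)   # drop the top and bottom 10% per pixel

print(plain_mean[20, 20], trimmed[20, 20])            # the trimmed combine ignores the hit
```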
Imaging single cells in a beam of live cyanobacteria with an X-ray laser (CXIDB ID 26)
van der Schot, Gijs
2015-02-10
This entry contains ten diffraction patterns, and reconstructed images, of individual living Cyanobium gracile cells, imaged using 517 eV X-rays from the LCLS XFEL. The Hawk software package was used for phasing. The Uppsala aerosol injector was used for sample injection, assuring very low noise levels. The cells come from various stages of the cell cycle, and were imaged in random orientations.
NASA Astrophysics Data System (ADS)
Tajik, Jehangir K.; Kugelmass, Steven D.; Hoffman, Eric A.
1993-07-01
We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure to function relationships. A thick slice, high temporal resolution mode is used to follow a bolus contrast agent for blood flow evaluation and is fused with a high spatial resolution, thin slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high resolution volume image. A text file containing these values along with a voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiology-based research findings to demonstrate the strengths of combining dynamic CT and HRCT relative to other scanning modalities to uniquely characterize normal pulmonary physiology and pathophysiology.
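As an illustration of the kind of one-input/one-output model such a module can wrap, the sketch below fits a gamma-variate to a synthetic time-attenuation curve with SciPy and reads off the mean transit time as the first moment of the fitted curve. It is a generic indicator-dilution example under our own assumptions, not the VIDA module itself.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def gamma_variate(t, A, t0, alpha, beta):
    """Classic indicator-dilution bolus shape."""
    tt = np.clip(t - t0, 0.0, None)
    return A * tt ** alpha * np.exp(-tt / beta)

t = np.linspace(0.0, 30.0, 61)                                     # seconds
truth = gamma_variate(t, A=5.0, t0=3.0, alpha=2.0, beta=1.5)
curve = truth + np.random.default_rng(3).normal(0.0, 0.2, t.size)  # noisy time-attenuation curve

popt, _ = curve_fit(gamma_variate, t, curve, p0=[1.0, 2.0, 1.5, 1.0],
                    bounds=([0.0, 0.0, 0.1, 0.1], [50.0, 10.0, 5.0, 5.0]))
fit = gamma_variate(t, *popt)
mtt = trapezoid(t * fit, t) / trapezoid(fit, t)                    # first moment = mean transit time
print(popt, mtt)
```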
Ferritin heavy chain as a molecular imaging reporter gene in glioma xenografts.
Cheng, Sen; Mi, Ruifang; Xu, Yu; Jin, Guishan; Zhang, Junwen; Zhou, Yiqiang; Chen, Zhengguang; Liu, Fusheng
2017-06-01
The development of glioma therapy in clinical practice (e.g., gene therapy) calls for efficient visualization and tracking of glioma cells in vivo. Human ferritin heavy chain (hFTH) is a novel reporter gene for magnetic resonance imaging (MRI). This study proposes hFTH as a reporter gene for MR molecular imaging in glioma xenografts. Rat C6 glioma cells were infected with packaged lentivirus carrying the hFTH and EGFP genes and obtained by fluorescence-activated cell sorting. The iron-loading ability was analyzed with a total iron reagent kit. Glioma nude mouse models were established subcutaneously and intracranially. In vivo tumor bioluminescence was then performed via the IVIS Spectrum imaging system. MR imaging was analyzed on a 7T animal MRI scanner. Finally, the expression of hFTH was analyzed by western blotting and histological analysis. Stable glioma cells carrying the hFTH and EGFP reporter genes were successfully obtained. The intracellular iron concentration was increased without impairing the cell proliferation rate. Glioma cells overexpressing hFTH showed significantly decreased signal intensity on T2-weighted MRI both in vitro and in vivo. EGFP fluorescence imaging could also be detected in the subcutaneous and intracranial glioma xenografts. Moreover, the expression of the transferrin receptor was significantly increased in glioma cells carrying the hFTH reporter gene. Our study illustrates that hFTH generates cellular MR imaging contrast efficiently in glioma via regulating the expression of the transferrin receptor. This might be a useful reporter gene for cell tracking and MR molecular imaging in glioma diagnosis, gene therapy and tumor metastasis.
Selections from 2017: Image Processing with AstroImageJ
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-12-01
Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves (published January 2017).
(Figure caption: The AIJ image display. A wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram. [Collins et al. 2017])
Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high-school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data.
Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and the data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data.
Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry with interactive light curve fitting (plot light curves of a star in real time).
Citation: Karen A. Collins et al. 2017 AJ 153 77. doi:10.3847/1538-3881/153/2/77
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.
1997-01-01
The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study, ICAMS was used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image indicate increasing complexity as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. The fractal dimension of a difference image computed for the May and August dates increases with pixel size up to a resolution of 120 meters, and then declines with increasing pixel size. This means that the greatest complexity in the difference images occurs around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute the differences between the two dates.
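Two of the processing steps described above, NDVI conversion and resampling of a 30 m image to coarser pixel sizes, can be sketched in Python/numpy as follows. This is a hedged illustration only: the band arrays are synthetic, block averaging is assumed as the resampling method (the abstract does not state which resampler was used), and the fractal dimension computation itself is not reproduced.

import numpy as np

def ndvi(red, nir):
    # Normalized Difference Vegetation Index; small epsilon avoids division by zero
    return (nir - red) / (nir + red + 1e-9)

def block_resample(image, factor):
    # Coarsen a 2-D array by averaging factor x factor blocks
    # (e.g. 30 m -> 60 m pixels for factor=2).
    ny, nx = (s // factor for s in image.shape)
    trimmed = image[:ny * factor, :nx * factor]
    return trimmed.reshape(ny, factor, nx, factor).mean(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    red, nir = rng.random((512, 512)), rng.random((512, 512))
    v = ndvi(red, nir)
    for factor in (2, 4, 8, 16, 32):             # 60, 120, 240, 480, 960 m from 30 m
        print(factor, block_resample(v, factor).shape)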
VizieR Online Data Catalog: OGLE II SMC eclipsing binaries (Wyrzykowski+, 2004)
NASA Astrophysics Data System (ADS)
Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M. K.; Zebrun, K.; Soszinski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.
2009-03-01
We present a new version of the OGLE-II catalog of eclipsing binary stars detected in the Small Magellanic Cloud, based on the Difference Image Analysis catalog of variable stars in the Magellanic Clouds containing data collected from 1997 to 2000. We found 1351 eclipsing binary stars in the central 2.4 square degree area of the SMC; 455 stars are newly discovered objects, not found in the previous release of the catalog. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. The full catalog with individual photometry is accessible from the OGLE Internet archive at ftp://sirius.astrouw.edu.pl/ogle/ogle2/var_stars/smc/ecl . Regular observations of the SMC fields started on June 26, 1997 and covered about 2.4 square degrees of the central parts of the SMC. Reductions of the photometric data collected up to the end of May 2000 were performed with the Difference Image Analysis (DIA) package. (1 data file).
CellProfiler and KNIME: open source tools for high content screening.
Stöter, Martin; Niederlein, Antje; Barsacchi, Rico; Meyenhofer, Felix; Brandl, Holger; Bickle, Marc
2013-01-01
High content screening (HCS) has established itself in the world of the pharmaceutical industry as an essential tool for drug discovery and drug development. HCS is currently starting to enter the academic world and might become a widely used technology. Given the diversity of problems tackled in academic research, HCS could experience some profound changes in the future, mainly with more imaging modalities and smart microscopes being developed. One of the limitations in the establishment of HCS in academia is flexibility and cost. Flexibility is important to be able to adapt the HCS setup to accommodate the multiple different assays typical of academia. Many cost factors cannot be avoided, but the costs of the software packages necessary to analyze large datasets can be reduced by using Open Source software. We present and discuss the Open Source software CellProfiler for image analysis and KNIME for data analysis and data mining that provide software solutions which increase flexibility and keep costs low.
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
NASA Astrophysics Data System (ADS)
Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.
2015-03-01
For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes and for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.
Digital holographic microscopy
NASA Astrophysics Data System (ADS)
Barkley, Solomon; Dimiduk, Thomas; Manoharan, Vinothan
Digital holographic microscopy is a 3D optical imaging technique with high temporal (~ms) and spatial (~10 nm) precision. However, its adoption as a characterization technique has been limited due to the inherent difficulty of recovering 3D data from the holograms. Successful analysis has traditionally required substantial knowledge about the sample being imaged (for example, the approximate positions of particles in the field of view), as well as expertise in scattering theory. To overcome the obstacles to widespread adoption of holographic microscopy, we developed HoloPy, an open-source Python package for analysis of holograms and scattering data. HoloPy uses Bayesian statistical methods to determine the geometry and properties of discrete scatterers from raw holograms. We demonstrate the use of HoloPy to measure the dynamics of colloidal particles at interfaces, to ascertain the structures of self-assembled colloidal particles, and to track freely swimming bacteria. The HoloPy codebase is thoroughly tested and well-documented to facilitate use by the broader experimental community. This research is supported by NSF Grant DMR-1306410 and NSERC.
CONRAD—A software framework for cone-beam imaging in radiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maier, Andreas; Choi, Jang-Hwan; Riess, Christian
2013-11-15
Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code, of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table-top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and quantitative performance comparison between the methods of different groups.
NASA Astrophysics Data System (ADS)
Krinitskiy, Mikhail; Sinitsyn, Alexey; Gulev, Sergey
2014-05-01
Cloud fraction is a critical parameter for the accurate estimation of short-wave and long-wave radiation, among the most important surface fluxes over sea and land. Massive estimates of the total cloud cover, as well as cloud amount for different layers of clouds, are available from visual observations, satellite measurements and reanalyses. However, these data are subject to different uncertainties and need continuous validation against highly accurate in-situ measurements. Sky imaging with a high-resolution fish-eye camera provides an excellent opportunity for collecting cloud cover data supplemented with additional characteristics hardly available from routine visual observations (e.g. the structure of cloud cover under broken cloud conditions, or parameters of the distribution of cloud dimensions). We present an operational automatic observational package based on a fish-eye camera taking sky images with high temporal resolution (up to 1 Hz) and a spatial resolution of 968x648 px. This spatial resolution has been justified as optimal by several sensitivity experiments. For use of the package on a research vessel, where horizontal positioning becomes critical, a special hardware and software extension to the package has been developed; these modules provide explicit detection of the optimal moment for shooting. For the post-processing of sky images we developed software implementing an algorithm for filtering the sunburn effect in cases of small and moderate cloud cover and broken cloud conditions. The same algorithm accurately quantifies the cloud fraction by analyzing the color mixture at each point and introducing a so-called "grayness rate index" for every pixel. The accuracy of the algorithm has been tested using data collected during several campaigns in 2005-2011 in the North Atlantic Ocean. The collection included more than 3000 images for different cloud conditions, supplied with observations of standard parameters. The system is fully autonomous and has a block for digital data collection on the hard disk. The system has been tested for a wide range of open-ocean cloud conditions and we will demonstrate pilot results of data processing and physical interpretation of fractional cloud cover estimation.
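The per-pixel colour analysis described above can be illustrated with a hedged Python/numpy sketch. The exact "grayness rate index" formula is not given in the abstract, so this sketch uses a simple red-to-blue ratio threshold, a common stand-in in sky-camera cloud detection; the threshold value and the synthetic image are assumptions.

import numpy as np

def cloud_fraction(rgb, threshold=0.77):
    # rgb: float array (ny, nx, 3) scaled to [0, 1].
    # Cloudy pixels appear grey (red close to blue); clear sky is strongly blue.
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6         # avoid division by zero
    cloud_mask = (r / b) > threshold
    return cloud_mask.mean(), cloud_mask

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sky = rng.uniform(0.0, 1.0, size=(480, 640, 3))    # stand-in for a fish-eye frame
    frac, mask = cloud_fraction(sky)
    print(f"estimated cloud fraction: {frac:.2f}")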
NASA Astrophysics Data System (ADS)
Andreon, S.; Gargiulo, G.; Longo, G.; Tagliaferri, R.; Capuano, N.
2000-12-01
Astronomical wide-field imaging performed with new large-format CCD detectors poses data reduction problems of unprecedented scale, which are difficult to deal with using traditional interactive tools. We present here NExt (Neural Extractor), a new neural network (NN) based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. Traditionally, in astronomical images, objects are first distinguished from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold; they are then classified as stars or as galaxies through diagnostic diagrams having variables chosen according to the astronomer's taste and experience. In the extraction step, assuming that images are well sampled, NExt requires only the simplest a priori definition of `what an object is' (i.e. it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised NN, approaching detection as a clustering problem that has been thoroughly studied in the artificial intelligence literature. The first part of the NExt procedure consists of an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace individualized through principal component analysis. At magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. We therefore adopted a supervised NN (i.e. a NN that first finds the rules to classify objects from examples and then applies them to the whole data set). In practice, each object is classified depending on its membership of the regions mapping the input feature space in the training set. In order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features we use a NN to select the most significant features among the large number of measured ones, and then we use these selected features to perform the classification task. In order to optimize the performance of the system, we implemented and tested several different models of NN. The comparison of the NExt performance with that of the best detection and classification package known to the authors (SExtractor) shows that NExt is at least as effective as the best traditional packages.
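The two-stage idea described above, compressing pixel features with PCA and then training a supervised classifier on labelled examples, can be sketched generically with scikit-learn. This is not the NExt code: the MLP stands in for the neural networks discussed in the paper, and the features and star/galaxy labels are synthetic.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_objects, n_features = 2000, 64                 # e.g. 8x8 pixel cut-outs per object
X = rng.normal(size=(n_objects, n_features))
y = (X[:, :4].sum(axis=1) > 0).astype(int)       # toy star/galaxy labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    PCA(n_components=8),                         # compress the redundant pixel information
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))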
Using Android-Based Educational Game for Learning Colloid Material
NASA Astrophysics Data System (ADS)
Sari, S.; Anjani, R.; Farida, I.; Ramdhani, M. A.
2017-09-01
This research is based on the importance of developing students' chemical literacy on Colloid material using Android-based educational game media. The educational game product was developed through a research and development design. In the analysis phase, material analysis is performed to generate concept maps, determine chemical literacy indicators, define game strategies and set game paths. In the design phase, product packaging is carried out, then validation and feasibility tests are performed. The research produces an Android-based educational game with the following characteristics: the Colloid material is presented in 12 game levels, in the form of questions and challenges, and presents visualizations of discourse, images and animation contextually to develop thinking processes and attitudes. Based on the analysis of the validation and trial results, the product is considered feasible to use.
Dautzenberg, B
2018-06-01
For years, the tobacco industry has worked to instill tobacco addiction in adolescents. Analysis of a 1973 R.J. Reynolds® document identified ten physical and psychological factors used to increase the number of young users of a cigarette brand. These young people are classified into three groups: pre-smokers, learners and smokers. Taste for pre-smokers and learners, and nicotine for smokers, are the main physical parameters. The industry clearly knows that tobacco is consumed mainly because of nicotine addiction, so it seeks to make adolescents addicted. It is interesting to note that in 1973 the cigarette pack was a positive factor in attracting young smokers, whereas now, with the arrival of plain packaging, the tobacco industry declares that packaging has no influence in attracting teenagers! Of the psychological factors, the only negative one is the self-image of the smoker; the tobacco industry already recognized in 1973 that smokers were unhappy about smoking. For learners, self-image and the experience of adults are the most important factors, which is why the industry strives to create a positive image and to convey the message that smoking initiation is a ritual of becoming an adult. According to the tobacco industry, stress and the alleviation of boredom are also important in turning pre-smokers into learners and learners into smokers. This article aims to provide practical tools for understanding industry initiatives targeting adolescents. The attached tool can be used by the teens or adults involved to understand how tobacco marketing to teenagers is optimized. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Thornberg, Steven M [Peralta, NM]
2012-07-31
A system is provided for testing the hermeticity of a package, such as a microelectromechanical systems package containing a sealed gas volume. A sampling device that can isolate the package and breach the gas seal is connected to a pulse valve that can controllably transmit small volumes, down to 2 nanoliters, to a gas chamber for analysis using gas chromatography/mass spectrometry diagnostics.
NASA Astrophysics Data System (ADS)
Alexander, A.; DeBlois, F.; Stroian, G.; Al-Yahya, K.; Heath, E.; Seuntjens, J.
2007-07-01
Radiotherapy research lacks a flexible computational research environment for Monte Carlo (MC) and patient-specific treatment planning. The purpose of this study was to develop a flexible software package on low-cost hardware with the aim of integrating new patient-specific treatment planning with MC dose calculations suitable for large-scale prospective and retrospective treatment planning studies. We designed the software package 'McGill Monte Carlo treatment planning' (MMCTP) for the research development of MC and patient-specific treatment planning. The MMCTP design consists of a graphical user interface (GUI), which runs on a simple workstation connected through standard secure-shell protocol to a cluster for lengthy MC calculations. Treatment planning information (e.g., images, structures, beam geometry properties and dose distributions) is converted into a convenient MMCTP local file storage format designated the McGill RT format. MMCTP features include (a) DICOM_RT, RTOG and CADPlan CART format imports; (b) 2D and 3D visualization views for images, structure contours, and dose distributions; (c) contouring tools; (d) DVH analysis and dose matrix comparison tools; (e) external beam editing; (f) MC transport calculation from beam source to patient geometry for photon and electron beams. The MC input files, which are prepared from the beam geometry properties and patient information (e.g., images and structure contours), are uploaded and run on a cluster using shell commands controlled from the MMCTP GUI. The visualization, dose matrix operation and DVH tools offer extensive options for plan analysis and comparison between MC plans and plans imported from commercial treatment planning systems. The MMCTP GUI provides a flexible research platform for the development of patient-specific MC treatment planning for photon and electron external beam radiation therapy. The impact of this tool lies in the fact that it allows for systematic, platform-independent, large-scale MC treatment planning for different treatment sites. Patient recalculations were performed to validate the software and ensure proper functionality.
Taminau, Jonatan; Meganck, Stijn; Lazar, Cosmin; Steenhoff, David; Coletta, Alain; Molter, Colin; Duque, Robin; de Schaetzen, Virginie; Weiss Solís, David Y; Bersini, Hugues; Nowé, Ann
2012-12-24
With an abundant amount of microarray gene expression data sets available through public repositories, new possibilities lie in combining multiple existing data sets. In this new context, analysis itself is no longer the problem, but retrieving and consistently integrating all this data before delivering it to the wide variety of existing analysis tools becomes the new bottleneck. We present the newly released inSilicoMerging R/Bioconductor package which, together with the earlier released inSilicoDb R/Bioconductor package, allows consistent retrieval, integration and analysis of publicly available microarray gene expression data sets. Inside the inSilicoMerging package a set of five visual and six quantitative validation measures are available as well. By providing (i) access to uniformly curated and preprocessed data, (ii) a collection of techniques to remove the batch effects between data sets from different sources, and (iii) several validation tools enabling the inspection of the integration process, these packages enable researchers to fully explore the potential of combining gene expression data for downstream analysis. The power of using both packages is demonstrated by programmatically retrieving and integrating gene expression studies from the InSilico DB repository [https://insilicodb.org/app/].
Lora, Antonio; Cosentino, Ugo; Gandini, Anna; Zocchetti, Carlo
2007-01-01
The treatment of schizophrenic disorders is the most important challenge for community care. The analysis focuses on the packages of care provided to 23,602 patients with an ICD-10 diagnosis of a schizophrenic disorder who were treated in 2001 by the Departments of Mental Health in Lombardy, Italy. A package of care refers to the mix of treatments provided to each patient during the year across different settings. Direct costs of the packages were calculated, and Linear Discriminant Analysis (LDA) was used to link socio-demographic and diagnostic sub-groups of patients to packages of care. People with schizophrenic disorders received relatively few distinct care packages: only four packages were each received by more than 5% of patients. Two thirds of the patients received only care provided by Community Mental Health Centres (CMHCs). In the other two packages exceeding 5%, care was provided by CMHCs jointly with General Hospitals or Day Care Facilities. Complex care packages were rare (only 6%). Both the intensity and the variety of care provided by CMHCs increased with the complexity of the care packages. In Lombardy, more than half of the resources were spent on schizophrenia. The range of costs per package was very wide, and LDA failed to link characteristics of the patients to packages of care. Care packages are useful tools for understanding better how the mental health system works and how resources have been spent, and for pointing out problems in the quality of care.
Documentation of operational protocol for the use of MAMA software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Daniel S.
2016-01-21
Image analysis of Scanning Electron Microscope (SEM) micrographs is a complex process that can vary significantly between analysts. The factors causing the variation are numerous, and the purpose of Task 2b is to develop and test a set of protocols designed to minimize variation in image analysis between different analysts and laboratories, specifically using the MAMA software package, Version 2.1. The protocols were designed to be “minimally invasive”, so that expert SEM operators will not be overly constrained in the way they analyze particle samples. The protocols will be tested using a round-robin approach in which results from expert SEM users at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Pacific Northwest National Laboratory, Savannah River National Laboratory, and the National Institute of Standards and Technology will be compared. The variation of the results will be used to quantify uncertainty in the particle image analysis process. The round-robin exercise will proceed with 3 levels of rigor, each with its own set of protocols, as described below in Tasks 2b.1, 2b.2, and 2b.3. The uncertainty will be developed using NIST standard reference material SRM 1984 “Thermal Spray Powder – Particle Size Distribution, Tungsten Carbide/Cobalt (Acicular)” [Reference 1]. Full details are available in the Certificate of Analysis, posted on the NIST website (http://www.nist.gov/srm/).
The gputools package enables GPU computing in R.
Buckner, Joshua; Wilson, Justin; Seligman, Mark; Athey, Brian; Watson, Stanley; Meng, Fan
2010-01-01
By default, the R statistical environment does not make use of parallelism. Researchers may resort to expensive solutions such as cluster hardware for large analysis tasks. Graphics processing units (GPUs) provide an inexpensive and computationally powerful alternative. Using R and the CUDA toolkit from Nvidia, we have implemented several functions commonly used in microarray gene expression analysis for GPU-equipped computers. R users can take advantage of the better performance provided by an Nvidia GPU. The package is available from CRAN, the R project's repository of packages, at http://cran.r-project.org/web/packages/gputools More information about our gputools R package is available at http://brainarray.mbni.med.umich.edu/brainarray/Rgpgpu
Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12
2015-09-03
The report covers data from the VIIRS sensor package aboard Suomi NPP, as well as data from the Geostationary Ocean Color Imager (GOCI) sensor aboard the Communication Ocean and… Stated objectives include preparing the near-real-time (NRT) GOCI data stream for integration into operations, and improvements in sensor…
Structured Forms Reference Set of Binary Images II (SFRS2)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access) The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Greiner, Annette; Cholia, Shreyas
Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
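The storage-layout idea described above (chunked, compressed HDF5 so that single-spectrum and single-image slices can be read selectively) can be illustrated with a generic h5py sketch. This is not the OpenMSI file format itself; the dataset name, chunk shape and cube dimensions are assumptions.

import numpy as np
import h5py

nx, ny, n_mz = 100, 80, 5000
cube = np.random.rand(nx, ny, n_mz).astype("float32")   # toy MSI cube (x, y, m/z)

with h5py.File("msi_demo.h5", "w") as f:
    dset = f.create_dataset(
        "msidata",
        data=cube,
        chunks=(8, 8, 512),            # chunks roughly aligned with typical access patterns
        compression="gzip",
        compression_opts=4,
    )
    dset.attrs["description"] = "toy MSI cube (x, y, m/z)"

with h5py.File("msi_demo.h5", "r") as f:
    spectrum = f["msidata"][10, 20, :]       # one full spectrum
    ion_image = f["msidata"][:, :, 1234]     # one m/z slice
    print(spectrum.shape, ion_image.shape)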
Differential maneuvering simulator data reduction and analysis software
NASA Technical Reports Server (NTRS)
Beasley, G. P.; Sigman, R. S.
1972-01-01
A multielement data reduction and analysis software package has been developed for use with the Langley differential maneuvering simulator (DMS). This package, which has several independent elements, was developed to support all phases of DMS aircraft simulation studies with a variety of both graphical and tabular information. The overall software package is considered unique because of the number, diversity, and sophistication of the element programs available for use in a single study. The purpose of this paper is to discuss the overall DMS data reduction and analysis package by reviewing the development of the various elements of the software, showing typical results that can be obtained, and discussing how each element can be used.
Detecting Multi-scale Structures in Chandra Images of Centaurus A
NASA Astrophysics Data System (ADS)
Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.
1999-12-01
Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A show simultaneously the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al. 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the North-West.
Experimental Approaches to Study Genome Packaging of Influenza A Viruses.
Isel, Catherine; Munier, Sandie; Naffakh, Nadia
2016-08-09
The genome of influenza A viruses (IAV) consists of eight single-stranded negative sense viral RNAs (vRNAs) encapsidated into viral ribonucleoproteins (vRNPs). It is now well established that genome packaging (i.e., the incorporation of a set of eight distinct vRNPs into budding viral particles), follows a specific pathway guided by segment-specific cis-acting packaging signals on each vRNA. However, the precise nature and function of the packaging signals, and the mechanisms underlying the assembly of vRNPs into sub-bundles in the cytoplasm and their selective packaging at the viral budding site, remain largely unknown. Here, we review the diverse and complementary methods currently being used to elucidate these aspects of the viral cycle. They range from conventional and competitive reverse genetics, single molecule imaging of vRNPs by fluorescence in situ hybridization (FISH) and high-resolution electron microscopy and tomography of budding viral particles, to solely in vitro approaches to investigate vRNA-vRNA interactions at the molecular level.
A reduction package for cross-dispersed echelle spectrograph data in IDL
NASA Astrophysics Data System (ADS)
Hall, Jeffrey C.; Neff, James E.
1992-12-01
We have written in IDL a data reduction package that performs reduction and extraction of cross-dispersed echelle spectrograph data. The present package includes a complete set of tools for extracting data from any number of spectral orders with arbitrary tilt and curvature. Essential elements include debiasing and flatfielding of the raw CCD image, removal of scattered light background, either nonoptimal or optimal extraction of data, and wavelength calibration and continuum normalization of the extracted orders. A growing set of support routines permits examination of the frame being processed to provide continuing checks on the statistical properties of the data and on the accuracy of the extraction. We will display some sample reductions and discuss the algorithms used. The inherent simplicity and user-friendliness of the IDL interface make this package a useful tool for spectroscopists. We will provide an email distribution list for those interested in receiving the package, and further documentation will be distributed at the meeting.
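The first two reduction steps listed above, debiasing and flat-fielding, can be sketched in Python/numpy as a language-neutral illustration (the package itself is written in IDL); the frame shapes and signal levels are made up.

import numpy as np

def debias_and_flatfield(raw, bias_frames, flat_frames):
    # raw: 2-D science frame; bias_frames, flat_frames: stacks of 2-D calibration frames.
    master_bias = np.median(bias_frames, axis=0)
    master_flat = np.median(flat_frames, axis=0) - master_bias
    master_flat /= np.median(master_flat)        # normalise to unit response
    return (raw - master_bias) / master_flat

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    bias = rng.normal(300, 3, size=(10, 256, 256))
    flat = rng.normal(10300, 30, size=(10, 256, 256))
    raw = rng.normal(5300, 20, size=(256, 256))
    print(debias_and_flatfield(raw, bias, flat).mean())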
Learning Photogrammetry with Interactive Software Tool PhoX
NASA Astrophysics Data System (ADS)
Luhmann, T.
2016-06-01
Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology, where high-quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that generate nice results easily, e.g. by structure-from-motion approaches. Within this context, the classical approach to teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals on new topics and provide them with more information behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included the use of digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years the software package PhoX has been developed, which is part of a new didactic concept in photogrammetry and related subjects. It also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It allows for almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises where they have the opportunity to analyse results in a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, calibration and orientation data. As one specific advantage, PhoX allows for the interactive modification of single parameters and the direct view of the resulting effect in image or object space.
Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets
Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda
2013-01-01
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
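The essence of the global analysis with variable projection described above can be sketched as follows: the lifetimes are shared across all pixels, and for any trial lifetimes the per-pixel amplitudes are obtained by linear least squares. This simplified sketch is not the FLIMfit implementation; it ignores the instrument response, repetitive excitation and background terms that the paper handles, and the decay data are synthetic.

import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 12.5, 256)                  # assumed time bins in ns
true_tau = np.array([0.8, 3.2])
rng = np.random.default_rng(0)
n_pix = 200
true_amp = rng.uniform(50, 500, size=(n_pix, 2))
data = true_amp @ np.exp(-np.outer(true_tau, t)) + rng.normal(0, 2, (n_pix, t.size))

def residual_norm(log_tau):
    # Variable projection objective: for trial lifetimes, solve all per-pixel
    # amplitudes linearly and return the total squared residual.
    basis = np.exp(-np.outer(np.exp(log_tau), t))          # shape (2, n_t)
    amps, *_ = np.linalg.lstsq(basis.T, data.T, rcond=None)
    model = amps.T @ basis
    return np.sum((data - model) ** 2)

fit = minimize(residual_norm, x0=np.log([1.0, 4.0]), method="Nelder-Mead")
print("recovered lifetimes (ns):", np.sort(np.exp(fit.x)))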
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temple, Brian Allen; Armstrong, Jerawan Chudoung
This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable, number 2 in the work package, is titled “Add the ability to read in more types of image file formats in PyRAT”. At present, PyRAT can only read uncompressed TIFF files. It is planned to expand the file formats that PyRAT can read, making it easier to use in more situations. The file formats added include JPEG (.jpeg/.jpg), PNG and formatted ASCII files.
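A multi-format reader of the kind this deliverable describes (TIFF, JPEG, PNG and formatted ASCII) could look roughly like the hedged sketch below, using Pillow and numpy as generic stand-ins for whatever PyRAT uses internally; the function name and the ASCII layout convention are assumptions.

import numpy as np
from pathlib import Path
from PIL import Image

def read_radiograph(path):
    # Return a 2-D float array from a TIFF/JPEG/PNG file or from a
    # whitespace-delimited ASCII file with one image row per text row.
    path = Path(path)
    if path.suffix.lower() in {".tif", ".tiff", ".jpg", ".jpeg", ".png"}:
        return np.asarray(Image.open(path).convert("F"))   # 32-bit float, single band
    return np.loadtxt(path, dtype=float)

if __name__ == "__main__":
    demo = Path("demo.png")
    Image.fromarray((np.random.rand(32, 32) * 255).astype("uint8")).save(demo)
    print(read_radiograph(demo).shape)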
Toolkit for testing scientific CCD cameras
NASA Astrophysics Data System (ADS)
Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz
2006-03-01
The CCD Toolkit (1) is a software tool for testing CCD cameras which allows the user to measure important characteristics of a camera such as readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in the FITS format commonly used in astronomy. The graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed in a way that ensures compatibility with different operating systems: Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
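Two of the characteristics listed above, total gain and readout noise, are commonly estimated from a pair of bias frames and a pair of flat frames; the hedged Python sketch below shows that classic calculation (the toolkit itself is ROOT-based, and the FITS file names here are hypothetical).

import numpy as np
from astropy.io import fits

def gain_and_read_noise(bias1, bias2, flat1, flat2):
    # All inputs are 2-D arrays in ADU from frames taken back to back.
    bias_diff = bias1 - bias2
    flat_diff = flat1 - flat2
    gain = ((flat1.mean() + flat2.mean()) - (bias1.mean() + bias2.mean())) / \
           (flat_diff.var() - bias_diff.var())             # e-/ADU
    read_noise = gain * bias_diff.std() / np.sqrt(2.0)     # electrons
    return gain, read_noise

if __name__ == "__main__":
    frames = [fits.getdata(name).astype(float)
              for name in ("bias1.fits", "bias2.fits", "flat1.fits", "flat2.fits")]
    g, rn = gain_and_read_noise(*frames)
    print(f"gain = {g:.2f} e-/ADU, read noise = {rn:.2f} e-")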
Software For Tie-Point Registration Of SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice
1995-01-01
The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. Other data sets, such as a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image, can also be registered, as long as the user can generate a binary image for the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.
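The core operation described above, registering an image to a reference frame from manually picked tie points, can be sketched in Python with numpy and scipy (SAR-REG itself is FORTRAN 77 and may use a different transform model); the affine model and the demo points are assumptions.

import numpy as np
from scipy import ndimage

def fit_affine(src_pts, ref_pts):
    # src_pts, ref_pts: (N, 2) arrays of matching (row, col) tie points.
    A = np.hstack([src_pts, np.ones((src_pts.shape[0], 1))])   # columns [row, col, 1]
    M, *_ = np.linalg.lstsq(A, ref_pts, rcond=None)            # 3x2 affine matrix
    return M

def warp_to_reference(image, M, output_shape):
    linear = M[:2, :].T                      # maps source column vectors to reference
    shift = M[2, :]
    inv = np.linalg.inv(linear)
    # ndimage maps output (reference) coordinates back to input (source) coordinates
    return ndimage.affine_transform(image, inv, offset=-inv @ shift,
                                    output_shape=output_shape, order=1)

if __name__ == "__main__":
    src = np.array([[10, 12], [40, 15], [22, 50], [55, 60]], float)
    ref = src + np.array([3.0, -2.0])        # pure shift for the demo
    M = fit_affine(src, ref)
    img = np.random.rand(80, 80)
    print(warp_to_reference(img, M, img.shape).shape)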
New SOFRADIR 10μm pixel pitch infrared products
NASA Astrophysics Data System (ADS)
Lefoul, X.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Aufranc, Sébastien; Decaens, G.; Ricard, N.; Mazaleyrat, E.; Billon-Lanfrey, D.; Gravrand, Olivier; Bisotto, Sylvette
2014-10-01
Recent advances in the miniaturization of IR imaging technology have led to a growing market for mini thermal-imaging sensors. In that respect, Sofradir's development of smaller pixel pitches has made much more compact products available to users. When this competitive advantage is combined with smaller coolers, made possible by HOT (high operating temperature) technology, we achieve valuable reductions in the size, weight and power of the overall package. At the same time, we are moving towards a global offering based on digital interfaces that provides our customers with simplifications in the IR system design process while freeing up more space. This paper discusses recent developments in HOT and small pixel pitch technologies, as well as efforts made on the compact packaging solution developed by SOFRADIR in collaboration with CEA-LETI.
The Role of Packaging in Solid Waste Management 1966 to 1976.
ERIC Educational Resources Information Center
Darnay, Arsen; Franklin, William E.
The goals of waste processors and packagers obviously differ: the packaging industry seeks durable container material that will be unimpaired by external factors. Until recently, no systematic analysis of the relationship between packaging and solid waste disposal had been undertaken. This three-part document defines these interactions, and the…
Cyrface: An interface from Cytoscape to R that provides a user interface to R packages.
Gonçalves, Emanuel; Mirlach, Franz; Saez-Rodriguez, Julio
2013-01-01
There is an increasing number of software packages to analyse biological experimental data in the R environment. In particular, Bioconductor, a repository of curated R packages, is one of the most comprehensive resources for bioinformatics and biostatistics. The use of these packages is increasing, but it requires a basic understanding of the R language, as well as the syntax of the specific package used. The availability of graphical user interfaces for these packages would decrease the learning curve and broaden their application. Here, we present a Cytoscape app termed Cyrface that allows Cytoscape apps to connect to any function and package developed in R. Cyrface can be used to run R packages from within the Cytoscape environment making use of a graphical user interface. Moreover, it can link R packages with the capabilities of Cytoscape and its apps, in particular network visualization and analysis. Cyrface's utility has been demonstrated for two Bioconductor packages (CellNOptR and DrugVsDisease), and here we further illustrate its usage by implementing a workflow of data analysis and visualization. Download links, installation instructions and user guides can be accessed from Cyrface's homepage (http://www.ebi.ac.uk/saezrodriguez/cyrface/) and from the Cytoscape app store (http://apps.cytoscape.org/apps/cyrface).
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined the Statistical Package for the Social Sciences (SPSS) and the Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
49 CFR 109.9 - Transportation for examination and analysis.
Code of Federal Regulations, 2011 CFR
2011-10-01
§ 109.9 Transportation for examination and analysis. (a) An agent may direct a package to be transported to a facility for examination and analysis… the package conforms to subchapter C of this chapter; (2) Conflicting information concerning the…
Learn by Yourself: The Self-Learning Tools for Qualitative Analysis Software Packages
ERIC Educational Resources Information Center
Freitas, Fábio; Ribeiro, Jaime; Brandão, Catarina; Reis, Luís Paulo; de Souza, Francislê Neri; Costa, António Pedro
2017-01-01
Computer Assisted Qualitative Data Analysis Software (CAQDAS) are tools that help researchers to develop qualitative research projects. These software packages help the users with tasks such as transcription analysis, coding and text interpretation, writing and annotation, content search and analysis, recursive abstraction, grounded theory…
Resilience Among Students at the Basic Enlisted Submarine School
2016-12-01
reported resilience. The Hayes macro in the Statistical Package for the Social Sciences (SPSS) was used to uncover factors relevant to mediation analysis. Findings suggest that the encouragement of…
White, Victoria; Williams, Tahlia; Wakefield, Melanie
2015-01-01
Objective To examine the impact of plain packaging of cigarettes with enhanced graphic health warnings on adolescents’ perceptions of pack image and perceived brand differences. Methods Cross-sectional school-based surveys conducted in 2011 (prior to introduction of new cigarette packaging) and in 2013 (7–12 months afterwards). Students aged 12–17 years (2011 n=6338; 2013 n=5915) indicated whether they had seen a cigarette pack in previous 6 months. Students rated the character of four popular cigarette brands, indicated level of agreement regarding differences between brands in ease of smoking, quitting, addictiveness, harmfulness and look of pack; and indicated positive and negative perceptions of pack image. Changes in responses of students seeing cigarette packs in the previous 6 months (2011: 60%; 2013: 65%) were examined. Results Positive character ratings for each brand reduced significantly between 2011 and 2013. Changes were found for four of five statements reflecting brand differences. Significantly fewer students in 2013 than 2011 agreed that ‘some brands have better looking packs than others’ (2011: 43%; 2013: 25%, p<0.001), with larger decreases found among smokers (interaction p<0.001). Packs were rated less positively and more negatively in 2013 than in 2011 (p<0.001). The decrease in positive image ratings was greater among smokers. Conclusions The introduction of standardised packaging has reduced the appeal of cigarette packs. Further research could determine if continued exposure to standardised packs creates more uncertainty or disagreement regarding brand differences in ease of smoking and quitting, perceived addictiveness and harms. PMID:28407611
Reengineering Workflow for Curation of DICOM Datasets.
Bennett, William; Smith, Kirk; Jarosz, Quasar; Nolan, Tracy; Bosch, Walter
2018-06-15
Reusable, publicly available data is a pillar of open science and rapid advancement of cancer imaging research. Sharing data from completed research studies not only saves research dollars required to collect data, but also helps ensure that studies are both replicable and reproducible. The Cancer Imaging Archive (TCIA) is a global shared repository for imaging data related to cancer. Ensuring the consistency, scientific utility, and anonymity of data stored in TCIA is of utmost importance. As the rate of submission to TCIA has been increasing, both in volume and complexity of DICOM objects stored, the process of curation of collections has become a bottleneck in acquisition of data. In order to increase the rate of curation of image sets, improve the quality of the curation, and better track the provenance of changes made to submitted DICOM image sets, a custom set of tools was developed, using novel methods for the analysis of DICOM data sets. These tools are written in the programming language Perl, use the open-source database PostgreSQL, make use of the Perl DICOM routines in the open-source package Posda, and incorporate DICOM diagnostic tools from other open-source packages, such as dicom3tools. These tools are referred to as the "Posda Tools." The Posda Tools are open source and available via git at https://github.com/UAMS-DBMI/PosdaTools . In this paper, we briefly describe the Posda Tools and discuss the novel methods employed by these tools to facilitate rapid analysis of DICOM data, including the following: (1) use of a database schema that is more permissive than, and differently normalized from, traditional DICOM databases; (2) automatic integrity checks performed on a bulk basis; (3) revisions applied to DICOM datasets on a bulk basis, either through a web-based interface or via command-line executable Perl scripts; (4) tracking of all such edits in a revision tracker, with the ability to roll them back; (5) a UI to inspect the results of such edits and verify that they are what was intended; (6) identification of DICOM Studies, Series, and SOP instances using "nicknames" that are persistent and have well-defined scope, to make expression of reported DICOM errors easier to manage; and (7) rapid identification of potential duplicate DICOM datasets by pixel data, which can be used, e.g., to identify submission subjects that may relate to the same individual without identifying the individual.
MTF analysis using lunar observations for Himawari-8/AHI
NASA Astrophysics Data System (ADS)
Keller, Graziela R.; Chang, Tiejun; Xiong, Xiaoxiong
2017-09-01
The modulation transfer function, or MTF, is a common measure of image fidelity, which has historically been characterized on-orbit using high-contrast images of the lunar limb obtained by remote sensing instruments onboard both low-orbit and geostationary satellites. Himawari-8, launched in 2014, is a Japanese geostationary satellite that carries the Advanced Himawari Imager (AHI), a near-identical copy of the Advanced Baseline Imager (ABI) instrument onboard the GOES-16 satellite. In this paper, we apply a variation of the slanted-edge method for deriving the MTF from lunar images, first verified by us on simulated test images, to the Himawari-8/AHI L1A and L1B data. The MTF is derived along the North/South and East/West directions separately. The AHI L1A images used in the characterization of the MTF are obtained from lunar observations routinely acquired for validating the radiometric calibration. The L1B data, which are spatially re-sampled, come from serendipitous lunar observations where the Moon appears close to the Earth's disk. We developed and implemented an algorithm to identify such occurrences using the SPICE/Icy package to predict the times when the Moon is visible in the L1B imagery, and we demonstrate their use for MTF derivation.
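The slanted-edge principle behind such an MTF estimate can be illustrated with a hedged toy sketch: build an edge spread function across a sharp edge, differentiate it to get the line spread function, and Fourier transform to obtain the MTF. Real lunar-limb processing additionally requires registering the limb, oversampling along the slant and windowing, which this sketch skips; the synthetic edge profile is an assumption.

import numpy as np

def mtf_from_edge(edge_profile):
    # edge_profile: 1-D intensity profile taken perpendicular to a sharp edge.
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf)                       # line spread function
    lsf /= lsf.sum()                             # normalise so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)     # cycles per pixel
    return freqs, mtf

if __name__ == "__main__":
    x = np.arange(64)
    esf = 1.0 / (1.0 + np.exp(-(x - 32) / 1.5))  # synthetic blurred edge
    freqs, mtf = mtf_from_edge(esf)
    print("MTF at Nyquist (0.5 cyc/pixel): %.3f" % np.interp(0.5, freqs, mtf))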
A system for beach video-monitoring: Beachkeeper plus
NASA Astrophysics Data System (ADS)
Brignone, Massimo; Schiaffino, Chiara F.; Isla, Federico I.; Ferrari, Marco
2012-12-01
A suitable knowledge of coastal systems, of their morphodynamic characteristics and of their response to storm events and man-made structures is essential for littoral conservation and management. Nowadays, webcams represent a useful device for obtaining information from beaches. Video-monitoring techniques are generally site-specific, and software that works with any image acquisition system is rare. Therefore, this work aims at presenting the theory and applications of an experimental video-monitoring software package: Beachkeeper plus, a freeware, non-profit software package that can be employed and redistributed without modifications. A license file is provided inside the software package and in the user guide. Beachkeeper plus is based on Matlab® and can be used for the analysis of images and photos coming from any kind of acquisition system (webcams, digital cameras or images downloaded from the internet), without any a-priori information or laboratory study of the acquisition system itself. Therefore, it could become a useful tool for beach planning. Through a simple guided interface, images can be analyzed by performing georeferencing, rectification, averaging and variance. This software was initially operated in Pietra Ligure (Italy), using images from a tourist webcam, and in Mar del Plata (Argentina), using images from a digital camera. In both cases, the reliability in different geomorphologic and morphodynamic conditions was confirmed by the good quality of the obtained images after georeferencing, rectification and averaging.
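The averaging and variance products mentioned above are the standard time-exposure and variance images of coastal video monitoring; a minimal Python/numpy sketch of that step is given below (Beachkeeper plus itself is Matlab-based, and the frame burst here is synthetic).

import numpy as np

def timex_and_variance(frames):
    # frames: (n_frames, ny, nx) or (n_frames, ny, nx, 3) stack of co-registered images.
    stack = np.asarray(frames, dtype=float)
    return stack.mean(axis=0), stack.var(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    burst = rng.random((120, 240, 320))          # 120 rectified frames
    timex, var = timex_and_variance(burst)
    print(timex.shape, var.shape)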
Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye, and we present a method to correct for the influence of crystalline lens fluorescence on the retinal fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, compared the different decay models in a healthy volunteer, and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning are shown for a patient with a macular hole. FLIMX's applicability to fluorescence lifetime imaging microscopy is demonstrated in the ganglion cell layer of a porcine retina sample imaged by a laser scanning microscope using two-photon excitation.
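The group-comparison step relies on the standard Holm-Bonferroni step-down procedure; the short sketch below shows that generic procedure only (FLIMX itself is a MATLAB package, so this Python version is purely illustrative).

    # Generic Holm-Bonferroni step-down correction, shown only to illustrate the
    # multiple-comparison procedure named above; FLIMX itself is MATLAB-based.


    def holm_bonferroni(p_values, alpha=0.05):
        """Return a list of booleans: True where the null hypothesis is rejected.

        The smallest p-value is tested at alpha/m, the next at alpha/(m-1), and
        so on; testing stops at the first non-significant result.
        """
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        reject = [False] * m
        for rank, idx in enumerate(order):
            if p_values[idx] <= alpha / (m - rank):
                reject[idx] = True
            else:
                break  # all remaining (larger) p-values are not rejected either
        return reject


    if __name__ == "__main__":
        # Expected output: [True, True, False, False, False]
        print(holm_bonferroni([0.001, 0.008, 0.039, 0.041, 0.20]))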
NASA Technical Reports Server (NTRS)
Ramesham, Rajeshuni
2012-01-01
This paper provides the experimental test results of advanced CCGA packages tested in extreme-temperature thermal environments. Standard optical inspection and x-ray non-destructive inspection tools were used to assess the reliability of high-density CCGA packages for deep space extreme-temperature missions. Ceramic column grid array (CCGA) packages have seen increasing use owing to advantages such as high interconnect density, very good thermal and electrical performance, and compatibility with standard surface-mount packaging assembly processes. CCGA packages are used in space applications such as logic and microprocessor functions, telecommunications, payload electronics, and flight avionics. Because these packages provide less solder-joint strain relief than leaded packages, though more than leadless chip carrier packages, the reliability of CCGA packages is very important for both short-term and long-term deep space missions. We employed high-density CCGA 1152 and CCGA 1272 daisy-chained electronic packages in this preliminary reliability study. Each package is divided into several daisy-chained sections. The CCGA1152 package measures 35 mm x 35 mm with a 34 x 34 array of columns at a 1 mm pitch; the CCGA1272 package measures 37.5 mm x 37.5 mm with a 36 x 36 array at a 1 mm pitch. The columns are made of 80%Pb/20%Sn material. The CCGA interconnect packages were assembled onto polyimide printed wiring boards and inspected using non-destructive x-ray imaging techniques. The assembled CCGA boards were then subjected to extreme-temperature thermal atmospheric cycling to assess their reliability for future deep space missions, with the resistance of the daisy-chained interconnect sections monitored continuously during thermal cycling. Keywords: Extreme temperatures, High density CCGA qualification, CCGA reliability, solder joint failures, optical inspection, and x-ray inspection.
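Continuous daisy-chain resistance monitoring of this kind is typically reduced to a simple event-detection rule in post-processing, for example flagging a sustained resistance increase above a percentage threshold; the helper below is a hypothetical sketch of such a rule, with an assumed threshold and persistence count rather than the criteria used in the study described above.

    # Hypothetical post-processing helper for logged daisy-chain resistance data;
    # the 20% threshold and 5-reading persistence are illustrative assumptions,
    # not the failure criteria used in the study described above.


    def first_failure_index(readings, baseline_ohms, threshold_pct=20.0, persistence=5):
        """Return the index where a run of `persistence` consecutive readings
        first exceeds the baseline by more than `threshold_pct`, or None."""
        limit = baseline_ohms * (1.0 + threshold_pct / 100.0)
        run_start, run_length = None, 0
        for i, r in enumerate(readings):
            if r > limit:
                if run_length == 0:
                    run_start = i
                run_length += 1
                if run_length >= persistence:
                    return run_start
            else:
                run_length = 0
        return None


    if __name__ == "__main__":
        log = [1.00, 1.01, 1.02, 1.05, 1.30, 1.31, 1.32, 1.35, 1.40, 1.45]
        print(first_failure_index(log, baseline_ohms=1.0))  # -> 4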
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, M
2009-03-06
This Technical Review Report (TRR) documents the review, performed by Lawrence Livermore National Laboratory (LLNL) staff at the request of the Department of Energy (DOE), of the 'Safety Analysis Report for Packaging (SARP), Model 9978 B(M)F-96', Revision 1, March 2009 (S-SARP-G-00002). The Model 9978 Package complies with 10 CFR 71 and with 'Regulations for the Safe Transport of Radioactive Material-1996 Edition (As Amended, 2000)-Safety Requirements', International Atomic Energy Agency (IAEA) Safety Standards Series No. TS-R-1. The Model 9978 Packaging is designed, analyzed, fabricated, and tested in accordance with Section III of the American Society of Mechanical Engineers Boiler and Pressure Vessel Code (ASME B&PVC). The review presented in this TRR was performed using the methods outlined in Revision 3 of the DOE's 'Packaging Review Guide (PRG) for Reviewing Safety Analysis Reports for Packages'. The format of the SARP follows that specified in Revision 2 of the Nuclear Regulatory Commission's Regulatory Guide 7.9, i.e., 'Standard Format and Content of Part 71 Applications for Approval of Packages for Radioactive Material'. Although the two documents are similar in their content, they are not identical; formatting differences have been noted in this TRR where appropriate. The Model 9978 Packaging is a single containment package, using a 5-inch containment vessel (5CV). It uses a nominal 35-gallon drum package design. In comparison, the Model 9977 Packaging uses a 6-inch containment vessel (6CV). The Model 9977 and Model 9978 Packagings were developed concurrently and were referred to as the General Purpose Fissile Material Package, Version 1 (GPFP). Both packagings use General Plastics FR-3716 polyurethane foam as insulation and as impact limiters. The 5CV is used as the Primary Containment Vessel (PCV) in the Model 9975-96 Packaging, which also has the 6CV as its Secondary Containment Vessel (SCV). In comparison, the Model 9975 Packagings use Celotex™ for insulation and as impact limiters. To provide a historical perspective, it is noted that the Model 9975-96 Packaging is a 35-gallon drum package design that has evolved from a family of packages designed by DOE contractors at the Savannah River Site. Earlier package designs, i.e., the Model 9965, Model 9966, Model 9967, and Model 9968 Packagings, were originally designed and certified in the early 1980s. In the 1990s, updated package designs incorporating features consistent with the then-newer safety requirements were proposed: the Model 9972, Model 9973, Model 9974, and Model 9975 Packagings, respectively. The Model 9975 Package was certified by the Packaging Certification Program, under the Office of Safety Management and Operations. The Model 9978 Package has six Content Envelopes: C.1 (²³⁸Pu Heat Sources), C.2 (Pu/U Metals), C.3 (Pu/U Oxides, Reserved), C.4 (U Metal or Alloy), C.5 (U Compounds), and C.6 (Samples and Sources). Per 10 CFR 71.59 (Code of Federal Regulations), the value of N is 50 for the Model 9978 Package, leading to a Criticality Safety Index (CSI) of 1.0. The Transport Index (TI), based on dose rate, is calculated to be a maximum of 4.1.
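For reference, the CSI value quoted above follows directly from the relation in 10 CFR 71.59, which divides 50 by the allowed array number N:

    \mathrm{CSI} = \frac{50}{N} = \frac{50}{50} = 1.0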
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Krishnan, Harinarayan
Fibers woven into ceramic composites provide exceptional strength-to-weight capabilities, transforming these materials into ones with exceptional resistance to high temperature and high strength combined with improved fracture toughness. Microcracks are inevitable when the material is under strain; they can be imaged using synchrotron X-ray computed micro-tomography (µ-CT) to assess variation in the material's mechanical toughness. An important part of this analysis is the recognition of fibrillar features. This paper presents algorithms for detecting and quantifying composite cracks and fiber breaks from high-resolution image stacks. First, we propose recognition algorithms to identify the different structures of the composite, including matrix cracks and fiber breaks. Second, we introduce our package F3D for fast filtering of large 3D imagery, implemented in OpenCL to take advantage of graphics cards. Results show that our algorithms automatically identify micro-damage and that the GPU-based implementation introduced here runs in minutes, 17x faster than similar tools on a typical image file.
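F3D implements its filters as OpenCL kernels for GPUs; as a rough CPU-side illustration of the kind of 3-D nonlinear filtering applied to a µ-CT stack before segmenting cracks and fiber breaks, the SciPy-based sketch below median-filters a volume and labels dark candidate regions (the filter size and threshold heuristic are assumptions, not F3D's kernels).

    # CPU-side illustration of 3-D filtering and candidate labeling for a micro-CT
    # stack; the actual F3D package runs such filters as OpenCL kernels on GPUs.
    # The filter size and threshold heuristic below are illustrative assumptions.
    import numpy as np
    from scipy import ndimage


    def denoise_and_mask(volume, median_size=3, threshold=None):
        """Median-filter a 3-D volume and label connected dark regions,
        a crude stand-in for crack/void candidates."""
        filtered = ndimage.median_filter(volume, size=median_size)
        if threshold is None:
            # Assumed heuristic: midpoint between the darkest and brightest voxel.
            threshold = 0.5 * (float(filtered.min()) + float(filtered.max()))
        mask = filtered < threshold
        _, n_components = ndimage.label(mask)
        return filtered, mask, n_components


    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        vol = rng.normal(loc=1.0, scale=0.1, size=(32, 64, 64))
        vol[10:12, 20:40, 20:40] = 0.2      # synthetic low-density "crack"
        _, mask, n = denoise_and_mask(vol)
        print("candidate components:", n)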