Normative Databases for Imaging Instrumentation.
Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray
2015-08-01
To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.
Automated processing of shoeprint images based on the Fourier transform for use in forensic science.
de Chazal, Philip; Flynn, John; Reilly, Richard B
2005-03-01
The development of a system for automatically sorting a database of shoeprint images based on the outsole pattern in response to a reference shoeprint image is presented. The database images are sorted so that those from the same pattern group as the reference shoeprint are likely to be at the start of the list. A database of 476 complete shoeprint images belonging to 140 pattern groups was established with each group containing two or more examples. A panel of human observers performed the grouping of the images into pattern categories. Tests of the system using the database showed that the first-ranked database image belongs to the same pattern category as the reference image 65 percent of the time and that a correct match appears within the first 5 percent of the sorted images 87 percent of the time. The system has translational and rotational invariance so that the spatial positioning of the reference shoeprint images does not have to correspond with the spatial positioning of the shoeprint images of the database. The performance of the system for matching partial-prints was also determined.
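As a rough illustration of the translation-invariant matching idea in the abstract above, the following Python sketch compares two prints through their Fourier magnitude spectra. This is a minimal stand-in for the paper's method, not a reproduction of it: the rotation handling (polar resampling of the spectrum) and the database ranking pipeline are omitted, and all names are illustrative.

```python
# Translation-invariant print comparison via FFT magnitude spectra.
# Circular shifts leave the magnitude spectrum unchanged, so a shifted
# copy scores 1.0 while unrelated noise scores noticeably lower (~0.8).
import numpy as np

def magnitude_descriptor(img: np.ndarray) -> np.ndarray:
    """Normalized FFT magnitude of the mean-subtracted image."""
    mag = np.abs(np.fft.fft2(img - img.mean()))
    return mag / (np.linalg.norm(mag) + 1e-12)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between the two magnitude descriptors."""
    return float(np.sum(magnitude_descriptor(a) * magnitude_descriptor(b)))

rng = np.random.default_rng(0)
print_a = rng.random((128, 128))
print_b = np.roll(print_a, (10, -7), axis=(0, 1))    # translated copy
print(similarity(print_a, print_b))                  # 1.0 (shift-invariant)
print(similarity(print_a, rng.random((128, 128))))   # ~0.8, unrelated image
```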
Structured Forms Reference Set of Binary Images (SFRS)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access) The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
Structured Forms Reference Set of Binary Images II (SFRS2)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access) The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
Techniques of Photometry and Astrometry with APASS, Gaia, and Pan-STARRS Results (Abstract)
NASA Astrophysics Data System (ADS)
Green, W.
2017-12-01
(Abstract only) The databases of the APASS DR9, Gaia DR1, and Pan-STARRS 3pi DR1 data releases are publicly available for use. Some data mining is involved in downloading and managing these reference stars. This paper discusses the use of these databases to acquire accurate photometric references, as well as techniques for improving results. Images are prepared in the usual way: zero, dark, and flat-field corrections, and WCS solutions with Astrometry.net. Images are then processed with SExtractor to produce an ASCII table of photometric measurements. The database manages photometric catalogs and images converted to ASCII tables. Scripts convert the files into SQL and assimilate them into database tables. Using SQL techniques, each image star is merged with reference data to produce publishable results. The VYSOS has over 13,000 images of the ONC5 field to process, with roughly 100 total fields in the campaign. This paper provides an overview of this daunting task.
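A minimal sketch of the kind of SQL merge the abstract describes, run against an in-memory SQLite database from Python. The table layout, column names, and the simple box-match tolerance are assumptions for illustration, not the author's actual schema.

```python
# Crossmatch image stars against catalog reference stars by position,
# then derive a per-star photometric zero point from the magnitude offset.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE image_stars (id INTEGER, ra REAL, dec REAL, inst_mag REAL);
CREATE TABLE ref_stars   (id INTEGER, ra REAL, dec REAL, cat_mag REAL);
INSERT INTO image_stars VALUES (1, 83.8221, -5.3911, -8.13);
INSERT INTO ref_stars   VALUES (7, 83.8222, -5.3910, 12.40);
""")

TOL = 2.0 / 3600.0  # 2 arcsec matching radius, in degrees

rows = con.execute("""
    SELECT i.id, r.id, r.cat_mag - i.inst_mag AS zero_point
    FROM image_stars AS i
    JOIN ref_stars AS r
      ON ABS(i.ra - r.ra) < :tol AND ABS(i.dec - r.dec) < :tol
""", {"tol": TOL}).fetchall()
print(rows)  # e.g. [(1, 7, 20.53)]
```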
Toward a standard reference database for computer-aided mammography
NASA Astrophysics Data System (ADS)
Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.
2008-03-01
Because of the lack of mammography databases with a large number of codified images and identified characteristics such as pathology, type of breast tissue, and abnormality, the development of robust systems for computer-aided diagnosis is difficult. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: the Mammographic Image Analysis Society Digital Mammogram Database (MIAS), the Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL) database, and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file naming, and web page and information file browsing. Disregarding resolution, this resulted in a total of 10,509 reference images, of which 6,767 are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes, and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
Reduced reference image quality assessment via sub-image similarity based redundancy measurement
NASA Astrophysics Data System (ADS)
Mou, Xuanqin; Xue, Wufeng; Zhang, Lei
2012-03-01
Reduced-reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its consistency with human perception and flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented, whose novelty lies in two aspects. First, it measures image redundancy by calculating the so-called Sub-image Similarity (SIS), and image quality is measured by comparing the SIS between the reference image and the test image. Second, the SIS is computed from the ratios of non-shift edges (NSE) between pairs of sub-images. Experiments on two IQA databases (the LIVE and CSIQ databases) show that, using only six features, the proposed metric works very well, with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Estimating the exact amount of noise present in an image, and the quality of an image in the absence of a reference image, are challenging tasks. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation in an image using the median of the wavelet transform coefficients, and then obtain a nearly exact estimate using curve fitting. The proposed noise estimation method provides the noise estimate within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Engineering (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used to validate the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
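The initial estimate described above is, in essence, the classic robust median estimator applied to the finest diagonal wavelet subband; a minimal sketch of that standard formulation follows, without the paper's curve-fitting refinement or DMOS mapping.

```python
# Robust wavelet-median noise estimate: for white Gaussian noise, the
# finest diagonal detail coefficients have the same sigma as the noise,
# and median(|cD|)/0.6745 estimates it robustly.
import numpy as np
import pywt  # PyWavelets

def estimate_noise_sigma(img: np.ndarray) -> float:
    """Estimate Gaussian noise std from the finest diagonal subband."""
    _, (_, _, cD) = pywt.dwt2(img.astype(float), "db1")
    return float(np.median(np.abs(cD)) / 0.6745)

rng = np.random.default_rng(1)
noisy = np.zeros((256, 256)) + rng.normal(0.0, 10.0, (256, 256))
print(estimate_noise_sigma(noisy))  # close to the true sigma of 10
```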
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2011-02-15
Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
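A small sketch of the "at least one radiologist" tallies reported in the Results, using a hypothetical in-memory representation of the per-lesion marks; parsing the consortium's actual XML annotation files (and their schema) is beyond this illustration.

```python
# Tally lesions by radiologist agreement. The mark lists below are
# hypothetical stand-ins for what the LIDC/IDRI XML files record.
lesions = [
    {"id": "n001", "marks": ["nodule>=3mm"] * 4},            # unanimous
    {"id": "n002", "marks": ["nodule>=3mm", "nodule<3mm"]},  # disagreement
    {"id": "n003", "marks": ["non-nodule>=3mm"]},
]

ge3 = [l for l in lesions if "nodule>=3mm" in l["marks"]]
unanimous = [l for l in ge3 if l["marks"].count("nodule>=3mm") == 4]
print(len(ge3), len(unanimous))  # 2 1
```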
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precision-at-five scores of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparse coding and supervised machine learning, two main characteristics of the HVS: a typical HVS captures scenes by sparse coding, and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with this model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of least squares support vector machine (LS-SVM), yielding a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 white noise images, 174 Gaussian blur images, and 174 fast fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
National Institute of Standards and Technology Data Gateway
NIST Scoring Package (PC database for purchase) The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also describes an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces a new concept, the 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to that terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the current state of the system, and the generated parameters are used to select the most suitable data transfer path on the network.
Applications of terahertz spectroscopy and imaging
NASA Astrophysics Data System (ADS)
Zhang, Cunlin; Mu, Kaijun
2009-07-01
We have examined the feasibility of applying THz time-domain spectroscopy (THz-TDS) to inspect 30 kinds of illicit drugs, 20 kinds of amino acids, and 10 kinds of explosives and related compounds (ERCs). We obtained their spectral fingerprints, established the corresponding database, and propose reference-free methods to extract the absorption and reflection spectra, respectively. We also use optical-pump/THz-probe measurements to study the ultrafast dynamics of semiconductors. In addition, we present several new THz imaging techniques, such as focal-plane multiwavelength phase imaging, reference-free phase imaging, polarization imaging, and continuous-wave (CW) standoff distance imaging.
Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2015-10-01
Quality assessment of 3D images encounters more challenges than its 2D counterpart. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment method for stereoscopic images that learns binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute a sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute a global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.
Image database for digital hand atlas
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente; Dey, Partha S.; Gertych, Arkadiusz; Pospiech-Kurkowska, Sylwia
2003-05-01
Bone age assessment is a procedure frequently performed in pediatric patients to evaluate growth disorders. A commonly used method is atlas matching, a visual comparison of a hand radiograph with a small reference set from the old Greulich-Pyle atlas. We have developed a new digital hand atlas with a large set of clinically normal hand images from diverse ethnic groups. In this paper, we present our system design and implementation of the digital atlas database to support computer-aided atlas matching for bone age assessment. The system consists of a hand atlas image database, a computer-aided diagnostic (CAD) software module for image processing and atlas matching, and a Web user interface. Users can use a Web browser to push DICOM images, directly or indirectly from PACS, to the CAD server for a bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, are then extracted and compared with patterns from the atlas image database to assess the bone age. The digital atlas method, built on a large image database and current Internet technology, provides an alternative to supplement or replace the traditional approach for a quantitative, accurate, and cost-effective assessment of bone age.
Digital hand atlas and computer-aided bone age assessment via the Web
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente
1999-07-01
A frequently used method of bone age assessment is atlas matching, a radiological examination of a hand image against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflect skeletal maturity, race and sex differences, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database, and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and presents them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, will be extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report will then be sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the current out-of-date one and allow bone age assessment to be computerized and done conveniently via the Web. In this paper, we present the system design and Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.
Black, J A; Waggamon, K A
1992-01-01
An isoelectric focusing method using thin-layer agarose gel has been developed for wheat gliadin. Using flat-bed units with a third electrode, up to 72 samples per gel may be analyzed. Advantages over traditional acid polyacrylamide gel electrophoresis methodology include: faster run times, nontoxic media, and greater sample capacity. The method is suitable for fingerprinting or purity testing of wheat varieties. Using digital images captured by a flat-bed scanner, a 4-band reference system using isoelectric points was devised. Software enables separated bands to be assigned pI values based upon reference tracks. Precision of assigned isoelectric points is shown to be on the order of 0.02 pH units. Captured images may be stored in a computer database and compared to unknown patterns to enable an identification. Parameters for a match with a stored pattern may be adjusted for pI interval required for a match, and number of best matches.
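The pI assignment step described above amounts to interpolating measured band positions against the four-band reference system; a minimal sketch with hypothetical positions and pI values follows.

```python
# Assign pI values to unknown bands by linear interpolation between
# reference bands of known pI. Positions and pI values are invented.
import numpy as np

ref_positions = np.array([12.0, 45.0, 80.0, 118.0])  # band positions (mm)
ref_pi = np.array([5.20, 6.10, 7.35, 8.40])          # known reference pI

unknown_positions = np.array([30.0, 95.0])
assigned_pi = np.interp(unknown_positions, ref_positions, ref_pi)
print(assigned_pi.round(2))  # [5.69 7.76]
```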
Improving accuracy and power with transfer learning using a meta-analytic database.
Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand
2012-01-01
Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
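A minimal sketch of the transfer-learning scheme described above, under the simplifying assumption that an L1-penalized linear model on synthetic "reference task" data stands in for the paper's sparse discriminant model; the voxels it selects define the ROI reused on a small new cohort.

```python
# Select predictive voxels on a large reference task with a sparse model,
# then fit a classifier on a small new cohort using only those voxels.
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(2)
n_voxels = 500
informative = rng.choice(n_voxels, size=20, replace=False)

def make_task(n_subjects):
    X = rng.normal(size=(n_subjects, n_voxels))
    y = (X[:, informative].sum(axis=1) > 0).astype(int)
    return X, y

X_ref, y_ref = make_task(200)   # large reference task
X_new, y_new = make_task(30)    # small new cohort

# Sparse model on the reference task: nonzero coefficients define the ROI.
roi = np.flatnonzero(Lasso(alpha=0.02).fit(X_ref, y_ref).coef_)

# Fit on the new task using only the transferred ROI.
clf = LogisticRegression().fit(X_new[:, roi], y_new)
print(len(roi), round(clf.score(X_new[:, roi], y_new), 2))
```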
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and search, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
The reference ballistic imaging database revisited.
De Ceuster, Jan; Dujardin, Sylvain
2015-03-01
A reference ballistic image database (RBID) contains images of cartridge cases fired in firearms that are in circulation: a ballistic fingerprint database. The performance of an RBID was investigated a decade ago by De Kinder et al. using IBIS® Heritage™ technology. The results of that study were published in this journal, issue 214. Since then, technologies have evolved significantly and novel apparatus have become available on the market. The current research article investigates the efficiency of another automated ballistic imaging system, Evofinder®, using the same database as used by De Kinder et al. The results demonstrate a significant increase in correlation efficiency: 38% of all matches were in first position on the Evofinder® correlation list, compared with IBIS® Heritage™ where only 19% were in first position. Average correlation times are comparable to those of the IBIS® Heritage™ system. While Evofinder® demonstrates specific improvement in mutually correlating different ammunition brands, ammunition dependence of the markings still strongly influences the correlation result, because the markings may vary considerably. As a consequence, many potential hits (36%) were still far down the correlation lists (positions 31 and lower). The large database was used to examine the probability of finding a match as a function of correlation list verification. As an example, the RBID study on Evofinder® demonstrates that to find at least 90% of all potential matches, at least 43% of the items in the database need to be compared on screen, and this for breech face markings and firing pin impressions separately. These results, although a clear improvement on the original RBID study, indicate that the implementation of such a database should still not be considered at present. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Image sharpness assessment based on wavelet energy of edge area
NASA Astrophysics Data System (ADS)
Li, Jin; Zhang, Hong; Zhang, Lei; Yang, Yifan; He, Lei; Sun, Mingui
2018-04-01
Image quality assessment is needed in multiple image processing areas, and blur is one of the key causes of image deterioration. Although effective full-reference image quality assessment metrics have been proposed in the past few years, no-reference methods are still an area of active research. Addressing this problem, this paper proposes a no-reference sharpness assessment method based on the wavelet transform which focuses on the edge area of an image. Based on two simple characteristics of the human visual system, weights are introduced to calculate the weighted log-energy of each wavelet subband. The final score is given by the ratio of high-frequency energy to total energy. The algorithm is tested on multiple databases. Compared with several state-of-the-art metrics, the proposed algorithm achieves better performance with lower runtime consumption.
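A minimal sketch of the energy-ratio idea described above (finest-scale wavelet energy over total energy), omitting the paper's edge-area restriction, log-energy weighting, and HVS-based weights; the wavelet choice and level count are assumptions.

```python
# Sharpness as the share of signal energy in the finest wavelet subbands:
# blurring removes fine-scale energy, so blurred images score lower.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def sharpness_score(img: np.ndarray, levels: int = 3) -> float:
    cA, *details = pywt.wavedec2(img.astype(float), "db2", level=levels)
    detail_energy = [sum(float(np.sum(d * d)) for d in lvl) for lvl in details]
    total = float(np.sum(cA * cA)) + sum(detail_energy)
    return detail_energy[-1] / (total + 1e-12)  # details[-1] = finest level

rng = np.random.default_rng(3)
sharp = rng.random((256, 256))
blurred = gaussian_filter(sharp, sigma=2.0)
print(sharpness_score(sharp) > sharpness_score(blurred))  # True
```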
Reference point detection for camera-based fingerprint image based on wavelet transformation.
Khalil, Mohammed S
2015-04-30
Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to fingerprint images obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency; these two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method was tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected image database and outperforms an existing method.
Image quality assessment by preprocessing and full reference model combination
NASA Astrophysics Data System (ADS)
Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.
2009-01-01
This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model allows a better fit of the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
Image Display in Local Database Networks
NASA Astrophysics Data System (ADS)
List, James S.; Olson, Frederick R.
1989-05-01
Dearchival of image data in the form of x-ray film provides a major challenge for radiology departments. In highly active referral environments such as tertiary care hospitals, patients may be referred to multiple clinical subspecialists within a very short time. Each clinical subspecialist frequently requires diagnostic image data to complete the diagnosis. This need for image access often interferes with the normal process of film handling and interpretation, subsequently reducing the efficiency of the department. The concept of creating a local image database on individual nursing stations utilizing the AT&T CommView Results Viewing Station (RVS) is being evaluated. Initial physician acceptance has been favorable. Objective measurements of operational productivity enhancements are in progress.
Image Reference Database in Teleradiology: Migrating to WWW
NASA Astrophysics Data System (ADS)
Pasqui, Valdo
The paper presents a multimedia Image Reference Data Base (IRDB) used in teleradiology. The application was developed at the University of Florence in the framework of the European Community TELEMED Project. TELEMED's overall goals and IRDB requirements are outlined and the resulting architecture is described. IRDB is a multisite database containing radiological images, selected for their scientific interest, and their related information. The architecture consists of a set of IRDB Installations which are accessed from Viewing Stations (VS) located at different medical sites. The interaction between VS and IRDB Installations follows the client-server paradigm and uses an OSI level-7 protocol named Telemed Communication Language. After reviewing the Florence prototype implementation and experimentation, IRDB migration to the World Wide Web (WWW) is discussed. A possible scenario for implementing IRDB on the basis of the WWW model is depicted, in order to exploit the capabilities of WWW servers and browsers. Finally, the advantages of this conversion are outlined.
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational, document-oriented, cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified by the same specialists on nine different subjective characteristics. Our goal with this database is to advance computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through its deployment in a cloud Database-as-a-Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The database currently holds 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is hosted in a MongoDB instance on a cloud infrastructure.
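A minimal sketch of what one such document-oriented nodule record and query might look like with pymongo; the field names are hypothetical, not the paper's schema, and a local MongoDB server is assumed to be running.

```python
# Insert a hypothetical nodule document and query by nested fields,
# illustrating the flexible document-oriented (NoSQL) schema described above.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["lidc_idri"]

db.nodules.insert_one({
    "exam_id": "LIDC-0001",
    "nodule_id": 3,
    "texture_3d": {"energy": 0.42, "entropy": 5.1, "contrast": 88.0},
    "radiologist_ratings": {"malignancy": 4, "spiculation": 2},
    "segmented": True,
})

# Query: all manually segmented nodules rated highly malignant.
query = {"segmented": True, "radiologist_ratings.malignancy": {"$gte": 4}}
for doc in db.nodules.find(query):
    print(doc["exam_id"], doc["nodule_id"])
```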
Molecular Imaging and Contrast Agent Database (MICAD): evolution and progress.
Chopra, Arvind; Shan, Liang; Eckelman, W C; Leung, Kam; Latterner, Martin; Bryant, Stephen H; Menkens, Anne
2012-02-01
The purpose of writing this review is to showcase the Molecular Imaging and Contrast Agent Database (MICAD; www.micad.nlm.nih.gov ) to students, researchers, and clinical investigators interested in the different aspects of molecular imaging. This database provides freely accessible, current, online scientific information regarding molecular imaging (MI) probes and contrast agents (CA) used for positron emission tomography, single-photon emission computed tomography, magnetic resonance imaging, X-ray/computed tomography, optical imaging and ultrasound imaging. Detailed information on >1,000 agents in MICAD is provided in a chapter format and can be accessed through PubMed. Lists containing >4,250 unique MI probes and CAs published in peer-reviewed journals and agents approved by the United States Food and Drug Administration as well as a comma separated values file summarizing all chapters in the database can be downloaded from the MICAD homepage. Users can search for agents in MICAD on the basis of imaging modality, source of signal/contrast, agent or target category, pre-clinical or clinical studies, and text words. Chapters in MICAD describe the chemical characteristics (structures linked to PubChem), the in vitro and in vivo activities, and other relevant information regarding an imaging agent. All references in the chapters have links to PubMed. A Supplemental Information Section in each chapter is available to share unpublished information regarding an agent. A Guest Author Program is available to facilitate rapid expansion of the database. Members of the imaging community registered with MICAD periodically receive an e-mail announcement (eAnnouncement) that lists new chapters uploaded to the database. Users of MICAD are encouraged to provide feedback, comments, or suggestions for further improvement of the database by writing to the editors at micad@nlm.nih.gov.
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics do not measure distortions with uniform performance across their possible range and across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data: the first groups images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
Blind image quality assessment without training on human opinion scores
NASA Astrophysics Data System (ADS)
Mittal, Anish; Soundararajan, Rajiv; Muralidhar, Gautam S.; Bovik, Alan C.; Ghosh, Joydeep
2013-03-01
We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS) that can predict the subjective quality of a distorted image without reference to a corresponding distortionless image, and without any training on human opinion scores of distorted images. These 'completely blind' models compete well with standard non-blind image quality indices in terms of subjective predictive performance when tested on the large publicly available LIVE Image Quality database.
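The NSS features such 'completely blind' models start from are typically mean-subtracted contrast-normalized (MSCN) coefficients, whose distribution is near-Gaussian for pristine images and deviates under distortion; a minimal sketch of that front end follows (the model fitting and quality scoring are not reproduced here).

```python
# Compute MSCN coefficients: local mean subtraction and divisive
# contrast normalization with a Gaussian window.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(img: np.ndarray, sigma: float = 7.0 / 6.0) -> np.ndarray:
    img = img.astype(float)
    mu = gaussian_filter(img, sigma)                    # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu   # local variance
    return (img - mu) / (np.sqrt(np.clip(var, 0, None)) + 1.0)

rng = np.random.default_rng(4)
img = rng.random((128, 128)) * 255
coeffs = mscn(img)
print(coeffs.mean().round(3), coeffs.std().round(3))  # near 0, spread < 1
```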
78 FR 70020 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-22
...; citizenship; physical characteristics; employment and military service history; credit references and credit... digital images, and in electronic databases. Background investigation forms are maintained in the...
The Descriptive Challenges of Fiber Art.
ERIC Educational Resources Information Center
Lunin, Lois F.
1990-01-01
A definition of fiber art, its history, materials and techniques, vocabularies, and creators and users of those vocabularies offer background for understanding the problems in preparing surrogates of this relatively recent art form for text and image databases. Four examples of fiber image information systems are described. (32 references) (EAM)
2016-05-04
IMESA) Access to Criminal Justice Information (CJI) and Terrorist Screening Databases (TSDB) References: See Enclosure 1 1. PURPOSE. In...CJI database mirror image files. (3) Memorandums of understanding with the FBI CJIS as the data broker for DoD organizations that need access ...not for access determinations. (3) Legal restrictions established by the Sex Offender Registration and Notification Act (SORNA) jurisdictions on
Deep supervised dictionary learning for no-reference image quality assessment
NASA Astrophysics Data System (ADS)
Huang, Yuge; Liu, Xuesong; Tian, Xiang; Zhou, Fan; Chen, Yaowu; Jiang, Rongxin
2018-03-01
We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor that is a fully convolutional network; an encoding module with an inherent dictionary that aggregates local features to output a fixed-length global quality-aware image representation; and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are simultaneously learned from the loss function. In addition, the model can predict quality scores for input images of arbitrary sizes in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.
A new phase-correlation-based iris matching for degraded images.
Krichen, Emine; Garcia-Salicetti, Sonia; Dorizzi, Bernadette
2009-08-01
In this paper, we present a new phase-correlation-based iris matching approach in order to deal with degradations in iris images due to unconstrained acquisition procedures. Our matching system is a fusion of global and local Gabor phase-correlation schemes. The main originality of our local approach is that we do not only consider the correlation peak amplitudes but also their locations in different regions of the images. Results on several degraded databases, namely, the CASIA-BIOSECURE and Iris Challenge Evaluation 2005 databases, show the improvement of our method compared to two available reference systems, Masek and Open Source for Iris (OSRIS), in verification mode.
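A minimal sketch of the global phase-correlation building block the abstract refers to: the inverse FFT of the normalized cross-power spectrum peaks at the relative shift, and the peak amplitude can serve as a match score. The paper's local region-wise scheme and peak-location analysis are not reproduced.

```python
# Phase correlation: recover the circular shift between two images and
# a match score from the normalized cross-power spectrum.
import numpy as np

def phase_correlate(ref: np.ndarray, test: np.ndarray):
    """Return (shift, score): test's circular shift relative to ref."""
    Fr, Ft = np.fft.fft2(ref), np.fft.fft2(test)
    cross = Ft * np.conj(Fr)
    surface = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return peak, float(surface[peak])

rng = np.random.default_rng(5)
iris_a = rng.random((64, 64))
iris_b = np.roll(iris_a, (5, 9), axis=(0, 1))
print(phase_correlate(iris_a, iris_b))  # peak at (5, 9), score near 1.0
```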
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
Pathology Imagebase: a reference image database for standardization of pathology.
Egevad, Lars; Cheville, John; Evans, Andrew J; Hörnblad, Jonas; Kench, James G; Kristiansen, Glen; Leite, Katia R M; Magi-Galluzzi, Cristina; Pan, Chin-Chen; Samaratunga, Hemamali; Srigley, John R; True, Lawrence; Zhou, Ming; Clements, Mark; Delahunt, Brett
2017-11-01
Despite efforts to standardize histopathology practice through the development of guidelines, the interpretation of morphology is still hampered by subjectivity. We here describe Pathology Imagebase, a novel mechanism for establishing an international standard for the interpretation of pathology specimens. The International Society of Urological Pathology (ISUP) established a reference image database through the input of experts in the field. Three panels were formed, one each for prostate, urinary bladder and renal pathology, consisting of 24 international experts. Each of the panel members uploaded microphotographs of cases into a non-public database. The remaining 23 experts were asked to vote from a multiple-choice menu. Prior to and while voting, panel members were unable to access the results of voting by the other experts. When a consensus level of at least two-thirds or 16 votes was reached, cases were automatically transferred to the main database. Consensus was reached in a total of 287 cases across five projects on the grading of prostate, bladder and renal cancer and the classification of renal tumours and flat lesions of the bladder. The full database is available to all ISUP members at www.isupweb.org. Non-members may access a selected number of cases. It is anticipated that the database will assist pathologists in calibrating their grading, and will also promote consistency in the diagnosis of difficult cases. © 2017 John Wiley & Sons Ltd.
Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki
2016-11-28
A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
Combined semantic and similarity search in medical image databases
NASA Astrophysics Data System (ADS)
Seifert, Sascha; Thoma, Marisa; Stegmaier, Florian; Hammon, Matthias; Kramer, Martin; Huber, Martin; Kriegel, Hans-Peter; Cavallaro, Alexander; Comaniciu, Dorin
2011-03-01
The current diagnostic process at hospitals is mainly based on reviewing and comparing images coming from multiple time points and modalities in order to monitor disease progression over a period of time. However, for ambiguous cases the radiologist relies heavily on reference literature or a second opinion. Although there is a vast amount of acquired images stored in PACS systems which could be reused for decision support, these data sets suffer from weak search capabilities. Thus, we present a search methodology which enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search and appearance-based similarity search. It enabled the elimination of 12% of the top ten hits which would arise without taking the semantic context into account.
Error and Uncertainty in the Accuracy Assessment of Land Cover Maps
NASA Astrophysics Data System (ADS)
Sarmento, Pedro Alexandre Reis
Traditionally, the accuracy assessment of land cover maps is performed by comparing these maps with a reference database intended to represent the "real" land cover, and this comparison is reported through thematic accuracy measures derived from confusion matrices. However, these reference databases are themselves a representation of reality and contain errors, due to human uncertainty in assigning the land cover class that best characterizes a certain area, causing bias in the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that allows the integration of the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impacts that uncertainty may have on the thematic accuracy measures reported to end users. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we studied the utility of fuzzy set theory, more precisely fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and its impact on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that express the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrices. The proposed methodology is illustrated with a case study assessing the accuracy of a land cover map of Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) data. The obtained results demonstrate that the inclusion of human uncertainty in reference databases provides much more information about the quality of land cover maps than the traditional approach.
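As a strongly simplified illustration of propagating reference-label uncertainty into accuracy measures, the sketch below treats each confusion-matrix cell as an interval (a crude stand-in for the dissertation's fuzzy intervals) and derives an overall-accuracy interval; the counts are invented.

```python
# Interval arithmetic on a 2-class confusion matrix: bound the overall
# accuracy given lower/upper bounds on each cell count.
import numpy as np

lo = np.array([[48, 2], [4, 40]], dtype=float)   # lower bounds on cell counts
hi = np.array([[52, 5], [7, 44]], dtype=float)   # upper bounds on cell counts

diag_lo, diag_hi = np.trace(lo), np.trace(hi)
off_lo = lo.sum() - diag_lo                      # fewest possible errors
off_hi = hi.sum() - diag_hi                      # most possible errors

# Accuracy is worst with few correct / many errors, best with the reverse.
acc_lo = diag_lo / (diag_lo + off_hi)
acc_hi = diag_hi / (diag_hi + off_lo)
print(round(acc_lo, 3), round(acc_hi, 3))  # 0.88 0.941
```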
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
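A minimal sketch of the two core stages named above, a discrete wavelet transform followed by uniform scalar quantization, using the 9/7 biorthogonal filter ('bior4.4') available in PyWavelets. This is not the FBI WSQ codec itself, which adds subband-dependent quantization bins and Huffman coding; the quantization step size here is an arbitrary illustration.

```python
# DWT -> uniform scalar quantization -> inverse DWT round trip.
import numpy as np
import pywt

def dwt_quantize_roundtrip(img: np.ndarray, step: float = 8.0) -> np.ndarray:
    coeffs = pywt.wavedec2(img.astype(float), "bior4.4", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / step) * step  # uniform scalar quantization
    rec = pywt.waverec2(
        pywt.array_to_coeffs(q, slices, output_format="wavedec2"), "bior4.4")
    return rec[:img.shape[0], :img.shape[1]]

rng = np.random.default_rng(6)
img = rng.random((128, 128)) * 255
rec = dwt_quantize_roundtrip(img)
print(float(np.abs(rec - img).mean()))  # small relative to the 0-255 range
```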
Fish Karyome: A karyological information network database of Indian Fishes.
Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra
2012-01-01
'Fish Karyome', a database of karyological information on Indian fishes, has been developed to serve as a central source of karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers; it contains karyological information on 171 of the 2438 finfish species reported in India and is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula, and cytogenetic markers. It also provides phenotypic information, including species name, classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. In addition, fish and karyotype images and references for the 171 finfish species have been included in the database. Fish Karyome was developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008, and Macromedia's Flash technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution, and the systematics of fishes.
Storage and retrieval of digital images in dermatology.
Bittorf, A; Krejci-Papa, N C; Diepgen, T L
1995-11-01
Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Up until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome the inherent limitations of those storage media with respect to the number of images stored, display, and available search parameters, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. A database was designed using the Entity-Relationship approach. It was implemented on a PC-Windows platform using MS Access® and MS Visual Basic®. A Sparc 10 workstation running the CERN Hypertext Transfer Protocol Daemon (httpd) 3.0 pre 6 software was used as the WWW server. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database featuring clinical and histopathological images was constructed which allows for multi-parameter searches and world-wide remote access.
Detection and Rectification of Distorted Fingerprints.
Si, Xuanbin; Feng, Jianjiang; Zhou, Jie; Luo, Yuxuan
2015-03-01
Elastic distortion of fingerprints is one of the major causes of false non-matches. While this problem affects all fingerprint recognition applications, it is especially dangerous in negative recognition applications, such as watchlist and deduplication applications, in which malicious users may purposely distort their fingerprints to evade identification. In this paper, we propose novel algorithms to detect and rectify skin distortion based on a single fingerprint image. Distortion detection is viewed as a two-class classification problem, for which the registered ridge orientation map and period map of a fingerprint are used as the feature vector and an SVM classifier is trained to perform the classification task. Distortion rectification (or, equivalently, distortion field estimation) is viewed as a regression problem, where the input is a distorted fingerprint and the output is the distortion field. To solve this problem, a database (called the reference database) of various distorted reference fingerprints and corresponding distortion fields is built in the offline stage; in the online stage, the nearest neighbor of the input fingerprint is found in the reference database and the corresponding distortion field is used to transform the input fingerprint into a normal one. Promising results have been obtained on three databases containing many distorted fingerprints, namely FVC2004 DB1, the Tsinghua Distorted Fingerprint database, and the NIST SD27 latent fingerprint database.
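A minimal sketch of the online rectification step described above: find the nearest neighbor of the input's feature vector in a (here randomly generated) reference database and warp the input with the retrieved distortion field. The feature vectors stand in for the registered orientation and period maps, and all sizes and values are invented.

    import numpy as np
    from scipy.ndimage import map_coordinates

    H, W = 64, 64
    ref_features = np.random.rand(50, 32)              # hypothetical offline database
    ref_fields = np.random.randn(50, 2, H, W) * 2.0    # per-pixel (dy, dx) offsets

    def rectify(image, features):
        # Nearest neighbor in feature space selects the distortion field.
        idx = np.argmin(np.linalg.norm(ref_features - features, axis=1))
        dy, dx = ref_fields[idx]
        yy, xx = np.mgrid[0:H, 0:W].astype(float)
        # Resample the distorted image at the displaced coordinates.
        return map_coordinates(image, [yy + dy, xx + dx], order=1, mode='nearest')

    out = rectify(np.random.rand(H, W), np.random.rand(32))
    print(out.shape)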
Absolute stellar photometry on moderate-resolution FPA images
Stone, T.C.
2009-01-01
An extensive database of star (and Moon) images has been collected by the ground-based RObotic Lunar Observatory (ROLO) as part of the US Geological Survey program for lunar calibration. The stellar data are used to derive nightly atmospheric corrections for the observations from extinction measurements, and absolute calibration of the ROLO sensors is based on observations of Vega and published reference flux and spectrum data. The ROLO telescopes were designed for imaging the Moon at moderate resolution, thus imposing some limitations for the stellar photometry. Attaining accurate stellar photometry with the ROLO image data has required development of specialized processing techniques. A key consideration is consistency in discriminating the star core signal from the off-axis point spread function. The analysis and processing methods applied to the ROLO stellar image database are described. © 2009 BIPM and IOP Publishing Ltd.
Incorporating the APS Catalog of the POSS I and Image Archive in ADS
NASA Technical Reports Server (NTRS)
Humphreys, Roberta M.
1998-01-01
The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored in run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte-offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds, and the FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 GB of data. A set of Web query forms is available on-line, as well as an on-line tutorial and documentation. The database is distributed to the Internet by a high-speed SGI server and a high-bandwidth disk system. The URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers running IRIX 5.3. A copy of the written documentation is included and the software is on the accompanying exabyte tape.
A new scale for the assessment of conjunctival bulbar redness.
Macchi, Ilaria; Bunya, Vatinee Y; Massaro-Giordano, Mina; Stone, Richard A; Maguire, Maureen G; Zheng, Yuanjie; Chen, Min; Gee, James; Smith, Eli; Daniel, Ebenezer
2018-06-05
Current scales for assessment of bulbar conjunctival redness have limitations for evaluating digital images. We developed a scale suited for evaluating digital images and compared it to the Validated Bulbar Redness (VBR) scale. From a digital image database of 4889 color-corrected bulbar conjunctival images, we identified 20 images with varied degrees of redness. These images, ten each of nasal and temporal views, constitute the Digital Bulbar Redness (DBR) scale. The chromaticity of these images was assessed with an established image processing algorithm. Using 100 unique, randomly selected images from the database, three trained, non-physician graders applied the DBR scale and the printed VBR scale. Agreement was assessed with weighted Kappa statistics (Kw). The DBR scale scores provide linear increments of 10 from 10 to 100 when redness is measured objectively with an established image processing algorithm. Exact agreement of all graders was 38%, and agreement within no more than a difference of ten units between graders was 91%. Kw for agreement between any two graders ranged from 0.57 to 0.73 for the DBR scale and from 0.38 to 0.66 for the VBR scale. The DBR scale allowed direct digital-to-digital image comparison, could be used in dim lighting, had both temporal and nasal conjunctival reference images, and permitted viewing reference and test images at the same magnification. The novel DBR scale, with its objective linear chromatic steps, demonstrated improved reproducibility, fewer visualization artifacts, and improved ease of use over the VBR scale for assessing conjunctival redness. Copyright © 2018. Published by Elsevier Inc.
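Inter-grader agreement of the kind reported above can be computed with a weighted kappa, for example via scikit-learn. The grader scores below are invented, and linear weighting is only one of several possible weighting schemes.

    from sklearn.metrics import cohen_kappa_score

    # Scores on the 10-100 scale in steps of ten; 'linear' weighting
    # penalizes disagreements in proportion to their distance.
    grader_a = [10, 20, 30, 50, 50, 70, 90, 100, 40, 60]
    grader_b = [10, 30, 30, 40, 50, 80, 90, 100, 40, 70]

    kw = cohen_kappa_score(grader_a, grader_b, weights='linear')
    print(f"weighted kappa: {kw:.2f}")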
Dimai, Hans P
2017-11-01
Dual-energy X-ray absorptiometry (DXA) is a two-dimensional imaging technology developed to assess bone mineral density (BMD) of the entire human skeleton, and specifically of the skeletal sites known to be most vulnerable to fracture. In order to simplify interpretation of BMD measurement results and allow comparability among different DXA devices, the T-score concept was introduced. In this concept, an individual's BMD is compared with the mean value of a young healthy reference population, with the difference expressed as a standard deviation (SD). Since the early nineties of the past century, the diagnostic categories "normal, osteopenia, and osteoporosis", as recommended by a WHO working group, have been based on this concept, and DXA is still the globally accepted "gold-standard" method for the noninvasive diagnosis of osteoporosis. Another score obtained from DXA measurement, termed the Z-score, describes the number of SDs by which the BMD of an individual differs from the mean value expected for age and sex. Although not intended for diagnosis of osteoporosis in adults, it nevertheless provides information about an individual's fracture risk compared to peers. DXA measurement can either be used as a "stand-alone" means of assessing an individual's fracture risk, or be incorporated into one of the available fracture risk assessment tools such as FRAX® or Garvan, improving the predictive power of such tools. The issue of which reference databases should be used by DXA-device manufacturers for T-score reference standards has recently been addressed by an expert group, which recommended use of the National Health and Nutrition Examination Survey III (NHANES III) database for the hip reference standard but manufacturers' own databases for the lumbar spine. Furthermore, in men it is recommended to use female reference databases for calculation of the T-score and male reference databases for calculation of the Z-score. Copyright © 2017 Elsevier Inc. All rights reserved.
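The T-score and Z-score definitions reduce to one line each. The following sketch uses invented BMD and reference values; real means and SDs come from the reference databases discussed above (e.g., NHANES III for the hip).

    def t_score(bmd, young_mean, young_sd):
        """SDs from the mean of a young healthy reference population."""
        return (bmd - young_mean) / young_sd

    def z_score(bmd, age_sex_mean, age_sex_sd):
        """SDs from the mean expected for the patient's age and sex."""
        return (bmd - age_sex_mean) / age_sex_sd

    t = t_score(bmd=0.72, young_mean=0.94, young_sd=0.12)
    print(f"T-score = {t:.1f}")   # values of -2.5 or below fall in the WHO osteoporosis range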
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. Recently established standards propose several important aspects for the assessment of face sample quality. Many existing no-reference image quality metrics (IQMs) can assess natural image quality using image-based quality attributes similar to those introduced in the standards. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality in accordance with system performance. We also analyze the strengths and weaknesses of the different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM on a face database can improve its performance. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities and for the development of multimodality biometric IQMs.
Data mining and visualization of average images in a digital hand atlas
NASA Astrophysics Data System (ADS)
Zhang, Aifeng; Gertych, Arkadiusz; Liu, Brent J.; Huang, H. K.
2005-04-01
We have collected a digital hand atlas containing digitized left-hand radiographs of normally developed children, grouped by age, sex, and race. A set of features reflecting each patient's stage of skeletal development has been calculated by automatic image processing procedures and stored in a database. This paper addresses a new concept, the "average" image in the digital hand atlas. The "average" reference image is selected for each group of normally developed children as the image whose bony features best represent the group's skeletal maturity. A data mining procedure was designed and applied to find the average image through average feature vector matching; it also provides a temporary solution for the missing-feature problem through polynomial regression. As more cases are added to the digital hand atlas, it can grow to provide clinicians with accurate reference images to aid the bone age assessment process.
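The average-image selection described above amounts to picking, for each group, the case whose feature vector lies nearest the group's mean feature vector. A minimal sketch with invented feature values:

    import numpy as np

    # Hypothetical bony-feature vectors for one age/sex/race group.
    group_features = {
        "case01": np.array([0.31, 1.20, 0.85]),
        "case02": np.array([0.29, 1.10, 0.90]),
        "case03": np.array([0.45, 1.60, 0.70]),
    }

    # The representative image minimizes the distance to the mean vector.
    mean_vec = np.mean(list(group_features.values()), axis=0)
    best = min(group_features, key=lambda k: np.linalg.norm(group_features[k] - mean_vec))
    print("representative (average) image:", best)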
No-reference image quality assessment based on statistics of convolution feature maps
NASA Astrophysics Data System (ADS)
Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo
2018-04-01
We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion in an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the type and degree of distortion an image has suffered. Finally, Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
Construction and comparative evaluation of different activity detection methods in brain FDG-PET.
Buchholz, Hans-Georg; Wenzel, Fabian; Gartenschläger, Martin; Thiele, Frank; Young, Stewart; Reuss, Stefan; Schreckenberger, Mathias
2015-08-18
We constructed and evaluated reference brain FDG-PET databases for use with three software programs (Computer-Aided Diagnosis for Dementia (CAD4D), Statistical Parametric Mapping (SPM), and NEUROSTAT), which allow user-independent detection of dementia-related hypometabolism in patients' brain FDG-PET. Thirty-seven healthy volunteers were scanned to construct, for each software package, a brain FDG reference database reflecting normal, age-dependent glucose consumption in the human brain. The databases were compared to each other to assess the impact of the different stereotactic normalization algorithms used by the software packages. In addition, performance of the new reference databases in the detection of altered glucose consumption in patients' brains was evaluated by calculating statistical maps of regional hypometabolism in FDG-PET of 20 patients with confirmed Alzheimer's dementia (AD) and of 10 non-AD patients. The extent (hypometabolic volume, referred to as cluster size) and magnitude (peak z-score) of detected hypometabolism were statistically analyzed. Differences between the reference databases built with CAD4D, SPM, and NEUROSTAT were observed: the different normalization methods produced altered spatial FDG patterns. When patient data were analyzed with the reference databases created using CAD4D, SPM, or NEUROSTAT, similar characteristic clusters of hypometabolism were found in the same brain regions in the AD group with each software package, but larger z-scores were observed with CAD4D and NEUROSTAT than those reported by SPM. Better concordance with CAD4D and NEUROSTAT was achieved using the spatially normalized images of SPM and an independent z-score calculation. The three software packages identified the peak z-score in the same brain region in 11 of 20 AD cases, and there was concordance between CAD4D and SPM in 16 AD subjects. The clinical evaluation of brain FDG-PET of 20 AD patients with CAD4D-, SPM-, or NEUROSTAT-generated databases from an identical reference dataset showed similar patterns of hypometabolism in the brain regions known to be involved in AD. The extent of hypometabolism and the peak z-score appeared to be influenced by the calculation method used in each software package rather than by the different spatial normalization parameters.
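The common core of such database-driven analyses is a voxelwise z-map of the patient against the normal database, followed by extraction of cluster extent and peak z-score. The sketch below uses random arrays as stand-ins for spatially and intensity-normalized volumes and an invented significance threshold.

    import numpy as np
    from scipy import ndimage

    normals = np.random.rand(37, 32, 32, 32) + 1.0   # 37 healthy reference scans
    patient = np.random.rand(32, 32, 32) + 0.8       # stand-in patient scan

    mu = normals.mean(axis=0)
    sd = normals.std(axis=0, ddof=1)
    z = (mu - patient) / sd          # positive z = hypometabolism vs. the normals

    mask = z > 2.0                   # hypothetical significance threshold
    labels, n = ndimage.label(mask)  # connected clusters of hypometabolic voxels
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    print("peak z:", float(z.max()), "largest cluster:", int(sizes.max()) if n else 0)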
Ishihara, Masaru; Onoguchi, Masahisa; Taniguchi, Yasuyo; Shibutani, Takayuki
2017-12-01
The aim of this study was to clarify the differences in thallium-201-chloride (thallium-201) myocardial perfusion imaging (MPI) scans evaluated by conventional Anger-type single-photon emission computed tomography (conventional SPECT) versus cadmium-zinc-telluride SPECT (CZT SPECT) imaging when normal databases for different ethnic groups are used. MPI scans from 81 consecutive Japanese patients were examined using conventional SPECT and CZT SPECT and analyzed with the pre-installed quantitative perfusion SPECT (QPS) software. We compared the summed stress score (SSS), summed rest score (SRS), and summed difference score (SDS) for the two SPECT devices. For a normal MPI reference, we usually use the Japanese databases for MPI created by the Japanese Society of Nuclear Medicine, which can be used with conventional SPECT but not with CZT SPECT. In this study, we used new Japanese normal databases constructed at our institution to compare conventional and CZT SPECT. Compared with conventional SPECT, CZT SPECT showed lower SSS (p < 0.001), SRS (p = 0.001), and SDS (p = 0.189) using the pre-installed database. In contrast, CZT SPECT showed no significant difference from conventional SPECT in QPS analysis using the normal databases from our institution. Myocardial perfusion analyses by CZT SPECT should be evaluated using normal databases based on the ethnic group being evaluated.
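For reference, the summed scores relate simply: each segment of a standard 17-segment model is scored 0 (normal) to 4 (absent uptake) against the normal database, and the summed difference score is the stress-rest difference. The segment scores below are invented.

    # Invented 17-segment scores at stress and rest.
    stress = [0, 0, 1, 2, 3, 0, 0, 1, 0, 0, 2, 1, 0, 0, 0, 0, 1]
    rest   = [0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]

    sss, srs = sum(stress), sum(rest)
    sds = sss - srs                  # reversible (stress-induced) defect burden
    print(f"SSS={sss}, SRS={srs}, SDS={sds}")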
An interactive system for computer-aided diagnosis of breast masses.
Wang, Xingwei; Li, Lihua; Liu, Wei; Xu, Weidong; Lederman, Dror; Zheng, Bin
2012-10-01
Although mammography is the only clinically accepted imaging modality for screening the general population to detect breast cancer, interpreting mammograms is difficult, with relatively low sensitivity and specificity. To provide radiologists with "a visual aid" in interpreting mammograms, we developed and tested an interactive system for computer-aided detection and diagnosis (CAD) of mass-like cancers. Using this system, an observer can view CAD-cued mass regions depicted on one image and then query any suspicious region (either cued or not cued by CAD). The CAD scheme automatically segments the suspicious region, or accepts a manually defined region, and computes a set of image features. Using a content-based image retrieval (CBIR) algorithm, the CAD system searches for a set of reference images depicting "abnormalities" similar to the queried region. Based on the image retrieval results and a decision algorithm, a classification score is assigned to the queried region. In this study, a reference database with 1,800 malignant mass regions and 1,800 benign and CAD-generated false-positive regions was used. A modified CBIR algorithm with a new function for stretching the attributes in the multi-dimensional space, together with the decision scheme, was optimized using a genetic algorithm. Using a leave-one-out testing method to classify suspicious mass regions, we compared the classification performance of two CBIR algorithms with either equally weighted or optimally stretched attributes. Using the modified CBIR algorithm, the area under the receiver operating characteristic curve was significantly increased from 0.865 ± 0.006 to 0.897 ± 0.005 (p < 0.001). This study demonstrated the feasibility of developing an interactive CAD system with a large reference database and achieving improved performance.
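A minimal sketch of the retrieval-and-scoring idea: per-attribute weights (the "stretching", optimized by a genetic algorithm in the study but fixed here) scale the feature space, the k nearest reference regions are retrieved, and a similarity-weighted vote yields the classification score. All data are randomly generated stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)
    ref_feats = rng.random((200, 8))          # stand-in reference-region features
    ref_labels = rng.integers(0, 2, 200)      # 1 = malignant, 0 = benign/false-positive
    weights = np.array([1.5, 0.7, 1.0, 2.0, 0.5, 1.2, 0.9, 1.1])  # hypothetical stretching

    def classify(query, k=15):
        # Weighted distance implements the attribute stretching.
        d = np.linalg.norm((ref_feats - query) * weights, axis=1)
        nearest = np.argsort(d)[:k]
        sim = 1.0 / (1.0 + d[nearest])        # similarity weights for the vote
        return np.sum(sim * ref_labels[nearest]) / np.sum(sim)

    print("malignancy score:", classify(rng.random(8)))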
Rice proteome analysis: a step toward functional analysis of the rice genome.
Komatsu, Setsuko; Tanaka, Naoki
2005-03-01
The technique of proteome analysis using 2-DE has the power to monitor global changes that occur in the protein complement of tissues and subcellular compartments. In this review, we describe construction of the rice proteome database, the cataloging of rice proteins, and the functional characterization of some of the proteins identified. Initially, proteins extracted from various tissues and organelles were separated by 2-DE and an image analyzer was used to construct a display or reference map of the proteins. The rice proteome database currently contains 23 reference maps based on 2-DE of proteins from different rice tissues and subcellular compartments. These reference maps comprise 13 129 rice proteins, and the amino acid sequences of 5092 of these proteins are entered in the database. Major proteins involved in growth or stress responses have been identified by using a proteomics approach and some of these proteins have unique functions. Furthermore, initial work has also begun on analyzing the phosphoproteome and protein-protein interactions in rice. The information obtained from the rice proteome database will aid in the molecular cloning of rice genes and in predicting the function of unknown proteins.
A data model and database for high-resolution pathology analytical image informatics.
Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel
2011-01-01
The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs): (1) development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information; and (2) development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications, such as shape and texture, and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup, and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of (1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; (2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and (3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei.
Modeling and managing pathology image analysis results in a database provides immediate benefits for the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support with other approaches such as programming languages. Standardized, semantically annotated data representation and interfaces also make it possible to share image data and analysis results more efficiently.
Wei, C P; Hu, P J; Sheng, O R
2001-03-01
When performing a primary reading of a newly taken radiological examination, a radiologist often needs to reference relevant prior images of the same patient for confirmation or comparison purposes. Support of such image references is of clinical importance and may have significant effects on radiologists' examination reading efficiency, service quality, and work satisfaction. To effectively support such image reference needs, we proposed and developed a knowledge-based patient image pre-fetching system, addressing several challenging requirements of the application, including the representation and learning of image reference heuristics and the management of data-intensive knowledge inferencing. Moreover, the system demands an extensible and maintainable architecture design capable of effectively adapting to a dynamic environment characterized by heterogeneous and autonomous data source systems. In this paper, we developed a synthesized object-oriented entity-relationship model, a conceptual model appropriate for representing radiologists' prior image reference heuristics, which are heuristic-oriented and data-intensive. We detailed the system architecture and design of the knowledge-based patient image pre-fetching system. Our architecture design is based on a client-mediator-server framework, capable of coping with a dynamic environment characterized by distributed, heterogeneous, and highly autonomous data source systems. To adapt to changes in radiologists' prior image reference heuristics, ID3-based multidecision-tree induction and CN2-based multidecision induction learning techniques were developed and evaluated. Experimentally, we examined the effects of the pre-fetching system on radiologists' examination readings. Preliminary results show that the knowledge-based patient image pre-fetching system supports radiologists' prior image reference needs more accurately than the current practice adopted at the study site, and that radiologists may become more efficient, consultatively effective, and better satisfied when supported by the pre-fetching system than when relying on the study site's pre-fetching practice.
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
Personal identification based on blood vessels of retinal fundus images
NASA Astrophysics Data System (ADS)
Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi
2008-03-01
Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with the reference fundus images in the database. In the first step, registration between the input image and the reference image is performed; this step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used for the evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method performs better than other biometrics except for DNA. For practical application with the public, a device that can easily capture retinal fundus images is needed. The proposed method is applicable not only to PI but also to systems that warn about the misfiling of fundus images in medical facilities.
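The decision rule at the heart of the method is compact: the cross-correlation coefficient of two vessel images compared against a threshold. The sketch below omits the registration search and uses random binary maps and an invented threshold.

    import numpy as np

    def similarity(a, b):
        # Cross-correlation coefficient of the pixel values.
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    vessels_input = (np.random.rand(64, 64) > 0.8).astype(float)  # stand-in vessel maps
    vessels_ref   = (np.random.rand(64, 64) > 0.8).astype(float)

    THRESHOLD = 0.6                                               # hypothetical threshold
    s = similarity(vessels_input, vessels_ref)
    print("match" if s > THRESHOLD else "no match", f"(similarity={s:.2f})")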
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... restrictions is set forth in CBP Dec. 06-09. The Designated List and accompanying image database may also be... reference to ``CBP Dec. 06-09'', the words ``extended by CBP Dec. 11-06''. Alan Bersin, Commissioner, U.S...
Night Vision Goggle Training; Development and Production of Six Video Programs
1992-11-01
Subject terms: multimedia, video production, aerial photography, night vision, videodisc, image intensification, night vision goggles. The programs serve as a reference tool at the squadron or wing level, run approximately ten minutes each, and demonstrate NVG field of view, field of regard, and scan techniques. Training device modalities include didactic and video. The production includes a videodisc that will serve as an NVG audio-visual database.
Normal Databases for the Relative Quantification of Myocardial Perfusion
Rubeaux, Mathieu; Xu, Yuan; Germano, Guido; Berman, Daniel S.; Slomka, Piotr J.
2016-01-01
Purpose of review Myocardial perfusion imaging (MPI) with SPECT is performed clinically worldwide to detect and monitor coronary artery disease (CAD). MPI allows an objective quantification of myocardial perfusion at stress and rest. This established technique relies on normal databases to compare patient scans against reference normal limits. In this review, we aim to introduce the process of MPI quantification with normal databases and describe the associated quantitative perfusion measures that are used. Recent findings New equipment and new software reconstruction algorithms have been introduced which require the development of new normal limits. The appearance and regional count variations of normal MPI scans may differ between these new scanners and standard Anger cameras. Therefore, these new systems may require the determination of new normal limits to achieve optimal accuracy in relative myocardial perfusion quantification. Accurate diagnostic and prognostic results rivaling those obtained by expert readers can be obtained by this widely used technique. Summary Throughout this review, we emphasize the importance of the different normal databases and the need for specific databases relative to distinct imaging procedures. Use of appropriate normal limits allows optimal quantification of MPI by taking into account subtle image differences due to the hardware and software used, and the population studied. PMID:28138354
Horsch, Alexander; Hapfelmeier, Alexander; Elter, Matthias
2011-11-01
Breast cancer is globally a major threat to women's health. Screening and adequate follow-up can significantly reduce mortality from breast cancer. Human second reading of screening mammograms can increase breast cancer detection rates, whereas this has not been proven for current computer-aided detection systems used as a "second reader". Critical factors include the detection accuracy of the systems and the screening experience and training of the radiologist with the system. When assessing the performance of systems and system components, the choice of evaluation methods is particularly critical; core assets herein are reference image databases and statistical methods. We have analyzed the characteristics and usage of the currently largest publicly available mammography database, the Digital Database for Screening Mammography (DDSM) from the University of South Florida, in literature indexed in Medline, IEEE Xplore, SpringerLink, and SPIE, with respect to the type of computer-aided diagnosis (CAD) (detection, CADe, or diagnostics, CADx), selection of database subsets, choice of evaluation method, and quality of descriptions. 59 publications presenting 106 evaluation studies met our selection criteria. In 54 studies (50.9%), the selection of test items (cases, images, regions of interest) extracted from the DDSM was not reproducible. Only 2 CADx studies, and no CADe studies, used the entire DDSM. The number of test items varied from 100 to 6000. Different statistical evaluation methods were chosen, the most common being train/test (34.9% of the studies), leave-one-out (23.6%), and N-fold cross-validation (18.9%). Database-related terminology tended to be imprecise or ambiguous, especially regarding the term "case". Overall, both the use of the DDSM as a data source for the evaluation of mammography CAD systems and the application of statistical evaluation methods were found to be highly diverse, so results reported from different studies are hardly comparable. Drawbacks of the DDSM (e.g., varying quality of lesion annotations) may contribute to the reasons, but greater bias seems to be caused by authors' own study design decisions. RECOMMENDATIONS/CONCLUSION: For future evaluation studies, we derive a set of 13 recommendations concerning the construction and usage of a test database, as well as the application of statistical evaluation methods.
Accurate registration of temporal CT images for pulmonary nodules detection
NASA Astrophysics Data System (ADS)
Yan, Jichao; Jiang, Luan; Li, Qiang
2017-02-01
Interpretation of temporal CT images can help radiologists to detect subtle interval changes between sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consists of three major registration steps. Firstly, an affine transformation is applied to the segmented lung region to obtain globally coarse-registered images. Secondly, B-spline-based free-form deformation (FFD) is used to refine the coarse registration. Thirdly, the Demons algorithm is applied to align the feature points extracted from the images registered in the second step with those of the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. Preliminary results showed that approximately 96.7% of cases achieved accurate registration based on subjective observation. Subtraction of the rigidly and non-rigidly registered images from the reference images effectively removed normal structures (e.g., blood vessels) and retained abnormalities (e.g., pulmonary nodules). This should prove useful for the screening of lung cancer in our future studies.
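A hedged sketch of such a three-step cascade (affine, then B-spline FFD, then Demons) using SimpleITK is shown below. Synthetic Gaussian-blob volumes stand in for the segmented lung CT volumes; the metric choices, mesh sizes, and iteration counts are placeholders rather than the paper's settings, and Demons is applied here directly to the images rather than to extracted feature points.

    import SimpleITK as sitk

    # Synthetic stand-ins for the reference and follow-up CT volumes.
    fixed = sitk.GaussianSource(sitk.sitkFloat32, size=[48, 48, 48],
                                sigma=[8.0] * 3, mean=[24.0, 24.0, 24.0])
    moving = sitk.GaussianSource(sitk.sitkFloat32, size=[48, 48, 48],
                                 sigma=[8.0] * 3, mean=[27.0, 25.0, 24.0])

    # 1) Global affine registration for coarse alignment.
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                 numberOfIterations=100)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    affine_tx = reg.Execute(fixed, moving)
    moving_affine = sitk.Resample(moving, fixed, affine_tx, sitk.sitkLinear, 0.0)

    # 2) B-spline free-form deformation to refine the coarse result.
    reg2 = sitk.ImageRegistrationMethod()
    reg2.SetMetricAsMeanSquares()
    reg2.SetOptimizerAsLBFGSB(numberOfIterations=50)
    reg2.SetInitialTransform(sitk.BSplineTransformInitializer(fixed, [6, 6, 6]))
    reg2.SetInterpolator(sitk.sitkLinear)
    ffd_tx = reg2.Execute(fixed, moving_affine)
    moving_ffd = sitk.Resample(moving_affine, fixed, ffd_tx, sitk.sitkLinear, 0.0)

    # 3) Demons for a final dense displacement field.
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(30)
    demons.SetStandardDeviations(1.0)
    field = demons.Execute(fixed, moving_ffd)
    warped = sitk.Resample(moving_ffd, fixed, sitk.DisplacementFieldTransform(field),
                           sitk.sitkLinear, 0.0)

    # Subtraction image: aligned normal structures cancel, interval changes remain.
    subtraction = fixed - warped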
Hunter, Susan B.; Vauterin, Paul; Lambert-Fair, Mary Ann; Van Duyne, M. Susan; Kubota, Kristy; Graves, Lewis; Wrigley, Donna; Barrett, Timothy; Ribot, Efrain
2005-01-01
The PulseNet National Database, established by the Centers for Disease Control and Prevention in 1996, consists of pulsed-field gel electrophoresis (PFGE) patterns obtained from isolates of food-borne pathogens (currently Escherichia coli O157:H7, Salmonella, Shigella, and Listeria) and textual information about the isolates. Electronic images and accompanying text are submitted from over 60 U.S. public health and food regulatory agency laboratories. The PFGE patterns are generated according to highly standardized PFGE protocols. Normalization and accurate comparison of gel images require the use of a well-characterized size standard in at least three lanes of each gel. Originally, a well-characterized strain of each organism was chosen as the reference standard for that particular database. The increasing number of databases, difficulty in identifying an organism-specific standard for each database, the increased range of band sizes generated by the use of additional restriction endonucleases, and the maintenance of many different organism-specific strains encouraged us to search for a more versatile and universal DNA size marker. A Salmonella serotype Braenderup strain (H9812) was chosen as the universal size standard. This strain was subjected to rigorous testing in our laboratories to ensure that it met the desired criteria, including coverage of a wide range of DNA fragment sizes, even distribution of bands, and stability of the PFGE pattern. The strategy used to convert and compare data generated by the new and old reference standards is described. PMID:15750058
36 CFR 1225.24 - When can an agency apply previously approved schedules to electronic records?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT SCHEDULING RECORDS § 1225.24 When... must notify the National Archives and Records Administration, Modern Records Programs (NWM), 8601... authority reference; and (v) Format of the records (e.g., database, scanned images, digital photographs, etc...
36 CFR 1225.24 - When can an agency apply previously approved schedules to electronic records?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT SCHEDULING RECORDS § 1225.24 When... must notify the National Archives and Records Administration, Modern Records Programs (NWM), 8601... authority reference; and (v) Format of the records (e.g., database, scanned images, digital photographs, etc...
36 CFR 1225.24 - When can an agency apply previously approved schedules to electronic records?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT SCHEDULING RECORDS § 1225.24 When... must notify the National Archives and Records Administration, Modern Records Programs (NWM), 8601... authority reference; and (v) Format of the records (e.g., database, scanned images, digital photographs, etc...
An Efficient, Lossless Database for Storing and Transmitting Medical Images
NASA Technical Reports Server (NTRS)
Fenstermacher, Marc J.
1998-01-01
This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The research resulted in the development of three new lossless SRC methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination), and MaxGBA (Max-Guided Bit Allocation).
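The set-redundancy idea can be demonstrated in a few lines: store one representative image (here the median, in the spirit of MARS, though the actual methods are more elaborate) plus per-image residuals, which compress better than the originals while the scheme remains lossless. The image set below is synthetic.

    import numpy as np
    import zlib

    rng = np.random.default_rng(1)
    base = rng.integers(0, 200, (64, 64), dtype=np.int16)
    # Eight "similar" images: the shared base plus small per-image noise.
    images = [base + rng.integers(-5, 6, base.shape, dtype=np.int16) for _ in range(8)]

    median = np.median(images, axis=0).astype(np.int16)

    # Compressing residuals against the median exploits the set redundancy;
    # median + residuals reconstruct every image exactly (lossless).
    raw = sum(len(zlib.compress(img.tobytes())) for img in images)
    resid = sum(len(zlib.compress((img - median).tobytes())) for img in images)
    total = resid + len(zlib.compress(median.tobytes()))
    print(f"originals: {raw} B, median + residuals: {total} B")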
Analysis and correction of Landsat 4 and 5 Thematic Mapper Sensor Data
NASA Technical Reports Server (NTRS)
Bernstein, R.; Hanson, W. A.
1985-01-01
Procedures for the correction and registration of Landsat TM image data are examined. The registration of Landsat-4 TM images of San Francisco to Landsat-5 TM images of the same area using an interactive geometric correction program and the cross-correlation technique is described, and the geometric correction and cross-correlation results are presented. The correction of the TM data to a map reference and to a cartographic database is discussed; geometric and cartographic analyses are applied to the registration results.
Rice proteome database: a step toward functional analysis of the rice genome.
Komatsu, Setsuko
2005-09-01
The technique of proteome analysis using two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) has the power to monitor global changes that occur in the protein complement of tissues and subcellular compartments. In this study, the proteins of rice were cataloged, a rice proteome database was constructed, and a functional characterization of some of the identified proteins was undertaken. Proteins extracted from various tissues and subcellular compartments in rice were separated by 2D-PAGE and an image analyzer was used to construct a display of the proteins. The Rice Proteome Database contains 23 reference maps based on 2D-PAGE of proteins from various rice tissues and subcellular compartments. These reference maps comprise 13129 identified proteins, and the amino acid sequences of 5092 proteins are entered in the database. Major proteins involved in growth or stress responses were identified using the proteome approach. Some of these proteins, including a beta-tubulin, calreticulin, and ribulose-1,5-bisphosphate carboxylase/oxygenase activase in rice, have unexpected functions. The information obtained from the Rice Proteome Database will aid in cloning the genes for and predicting the function of unknown proteins.
Schlaeger, Sarah; Freitag, Friedemann; Klupp, Elisabeth; Dieckmeyer, Michael; Weidlich, Dominik; Inhuber, Stephanie; Deschauer, Marcus; Schoser, Benedikt; Bublitz, Sarah; Montagnese, Federica; Zimmer, Claus; Rummeny, Ernst J; Karampinos, Dimitrios C; Kirschke, Jan S; Baum, Thomas
2018-01-01
Magnetic resonance imaging (MRI) can non-invasively assess muscle anatomy, exercise effects, and pathologies with different underlying causes such as neuromuscular diseases (NMD). Quantitative MRI, including fat fraction mapping using chemical shift encoding-based water-fat MRI, has emerged for reliable determination of muscle volume and fat composition. The analysis of water-fat images requires segmentation of the individual muscles, which has mainly been performed manually in the past and is a very time-consuming process, currently limiting clinical applicability. Automating the segmentation process would allow a more time-efficient analysis. In the present work, the manually segmented thigh magnetic resonance imaging database MyoSegmenTUM is presented. It hosts water-fat MR images of both thighs of 15 healthy subjects and 4 patients with NMD, with a voxel size of 3.2×2×4 mm³, together with the corresponding segmentation masks for four functional muscle groups: quadriceps femoris, sartorius, gracilis, and hamstrings. The database is freely accessible online at https://osf.io/svwa7/?view_only=c2c980c17b3a40fca35d088a3cdd83e2. The database is mainly meant as a ground truth that can be used as a training and test dataset for automatic muscle segmentation algorithms. The segmentation allows extraction of muscle cross-sectional area (CSA) and volume. Proton density fat fraction (PDFF) of the defined muscle groups from the corresponding images, together with quadriceps muscle strength measurements and neurological muscle strength ratings, can be used for benchmarking purposes.
Golestaneh, S Alireza; Karam, Lina
2016-08-24
Perceptual image quality assessment (IQA) attempts to use computational models to estimate image quality in accordance with subjective evaluations. Reduced-reference (RR) IQA methods make use of partial information or features extracted from the reference image to estimate the quality of distorted images. Finding a balance between the number of RR features and the accuracy of the estimated image quality is essential and important in IQA. In this paper we propose a training-free, low-cost RRIQA method that requires a very small number of RR features (six RR features). The proposed RRIQA algorithm is based on the discrete wavelet transform (DWT) of locally weighted gradient magnitudes. We apply the human visual system's contrast sensitivity and neighborhood gradient information to weight the gradient magnitudes in a locally adaptive manner. The RR features are computed by measuring the entropy of each DWT subband, for each scale, and pooling the subband entropies along all orientations, resulting in L RR features (one average entropy per scale) for an L-level DWT. Extensive experiments performed on seven large-scale benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models as well as full-reference ones, for both natural and texture images. The MATLAB source code of REDLOG and the evaluation results are publicly available online at http://lab.engineering.asu.edu/ivulab/software/redlog/.
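A simplified sketch of the feature pipeline (gradient magnitudes, an L-level DWT, and one pooled entropy per scale) is shown below. The paper's contrast-sensitivity and neighborhood-based local weighting is omitted, and the wavelet choice, level count, and histogram binning are assumptions.

    import numpy as np
    import pywt

    def rr_features(img, levels=6, wavelet='db2', bins=64):
        # Gradient magnitude of the image.
        gy, gx = np.gradient(img.astype(float))
        gmag = np.hypot(gx, gy)
        # L-level 2D DWT of the gradient-magnitude map.
        coeffs = pywt.wavedec2(gmag, wavelet, level=levels)
        feats = []
        for detail in coeffs[1:]:                    # one (H, V, D) tuple per scale
            vals = np.concatenate([np.abs(d).ravel() for d in detail])
            hist, _ = np.histogram(vals, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            feats.append(-(p * np.log2(p)).sum())    # entropy pooled over orientations
        return np.array(feats)                       # L features, one per scale

    print(rr_features(np.random.rand(256, 256)))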
Brabb, Earl E.; Roberts, Sebastian; Cotton, William R.; Kropp, Alan L.; Wright, Robert H.; Zinn, Erik N.; Digital database by Roberts, Sebastian; Mills, Suzanne K.; Barnes, Jason B.; Marsolek, Joanna E.
2000-01-01
This publication consists of a digital map database on a geohazards web site, http://kaibab.wr.usgs.gov/geohazweb/intro.htm, this text, and 43 digital map images available for downloading at this site. The report is stored as several digital files, in ARC export (uncompressed) format for the database, and Postscript and PDF formats for the map images. Several of the source data layers for the images have already been released in other publications by the USGS and are available for downloading on the Internet. These source layers are not included in this digital database, but rather a reference is given for the web site where the data can be found in digital format. The exported ARC coverages and grids lie in UTM zone 10 projection. The pamphlet, which only describes the content and character of the digital map database, is included as Postscript, PDF, and ASCII text files and is also available on paper as USGS Open-File Report 00-127. The full versatility of the spatial database is realized by importing the ARC export files into ARC/INFO or an equivalent GIS. Other GIS packages, including MapInfo and ARCVIEW, can also use the ARC export files. The Postscript map image can be used for viewing or plotting in computer systems with sufficient capacity, and the considerably smaller PDF image files can be viewed or plotted in full or in part from Adobe ACROBAT software running on Macintosh, PC, or UNIX platforms.
Rinaldi, Pasquale; Barca, Laura; Burani, Cristina
2004-08-01
The CFVlexvar.xls database includes imageability, frequency, and grammatical properties of the first words acquired by Italian children. For each of 519 words that are known by children 18-30 months of age (taken from Caselli & Casadio's, 1995, Italian version of the MacArthur Communicative Development Inventory), new values of imageability are provided and values for age of acquisition, child written frequency, and adult written and spoken frequency are included. In this article, correlations among the variables are discussed and the words are grouped into grammatical categories. The results show that words acquired early have imageable referents, are frequently used in the texts read and written by elementary school children, and are frequent in adult written and spoken language. Nouns are acquired earlier and are more imageable than both verbs and adjectives. The composition in grammatical categories of the child's first vocabulary reflects the composition of adult vocabulary. The full set of these norms can be downloaded from www.psychonomic.org/archive/.
Kim, Ki Hwan; Do, Won-Joon; Park, Sung-Hong
2018-05-04
The routine MRI scan protocol consists of multiple pulse sequences that acquire images of varying contrast. Since high-frequency content such as edges is not significantly affected by image contrast, down-sampled images in one contrast may be improved using high-resolution (HR) images acquired in another contrast, reducing the total scan time. In this study, we propose a new deep learning framework that uses HR MR images in one contrast to generate HR MR images from highly down-sampled MR images in another contrast. The proposed convolutional neural network (CNN) framework consists of two CNNs: (a) a reconstruction CNN for generating HR images from the down-sampled images using HR images acquired with a different MRI sequence, and (b) a discriminator CNN for improving the perceptual quality of the generated HR images. The proposed method was evaluated using a public brain tumor database and in vivo datasets. The performance of the proposed method was assessed in tumor and no-tumor cases separately, with perceptual image quality judged by a radiologist. To overcome the challenge of training the network with a small number of available in vivo datasets, the network was pretrained using the public database and then fine-tuned using the small number of in vivo datasets. The performance of the proposed method was also compared with that of several compressed sensing (CS) algorithms. Incorporating HR images of another contrast improved the quantitative assessments of the generated HR images with respect to the ground truth, and incorporating a discriminator CNN yielded perceptually higher image quality. These results were verified in regions of normal tissue as well as tumors for various MRI sequences, using pseudo k-space data generated from the public database. The combination of pretraining with the public database and fine-tuning with the small number of real k-space datasets enhanced the performance of the CNNs in the in vivo application compared with training the CNNs from scratch. The proposed method outperformed the compressed sensing methods and can be a good strategy for accelerating routine MRI scanning. © 2018 American Association of Physicists in Medicine.
Design of a web portal for interdisciplinary image retrieval from multiple online image resources.
Kammerer, F J; Frankewitsch, T; Prokosch, H-U
2009-01-01
Images play an important role in medicine. Finding the desired images within the multitude of online image databases is a time-consuming and frustrating process. Existing websites do not meet all the requirements for an ideal learning environment for medical students. This work intends to establish a new web portal providing a centralized access point to a selected number of online image databases. A back-end system locates images on given websites and extracts relevant metadata. The images are indexed using UMLS and the MetaMap system provided by the US National Library of Medicine. Specially developed functions allow to create individual navigation structures. The front-end system suits the specific needs of medical students. A navigation structure consisting of several medical fields, university curricula and the ICD-10 was created. The images may be accessed via the given navigation structure or using different search functions. Cross-references are provided by the semantic relations of the UMLS. Over 25,000 images were identified and indexed. A pilot evaluation among medical students showed good first results concerning the acceptance of the developed navigation structures and search features. The integration of the images from different sources into the UMLS semantic network offers a quick and an easy-to-use learning environment.
36 CFR § 1225.24 - When can an agency apply previously approved schedules to electronic records?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT SCHEDULING RECORDS § 1225.24 When... must notify the National Archives and Records Administration, Modern Records Programs (NWM), 8601... authority reference; and (v) Format of the records (e.g., database, scanned images, digital photographs, etc...
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) that can predict quality scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, an ensemble method is used to combine the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one; they turn out to perform better than the original single-scale method. Because the models draw features from five different domains at multiple image scales and use the outputs (scores) of the selected score-prediction models as features for multi-scale or cross-scale fusion (i.e., ensembling), the proposed NR image quality assessment models are robust to more than 24 image distortion types. They can also be used to evaluate images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the robustness of the proposed model.
[Design and development of an online system of parasite's images for training and evaluation].
Yuan-Chun, Mao; Sui, Xu; Jie, Wang; Hua-Yun, Zhou; Jun, Cao
2017-08-08
To design and develop an online training and evaluation system for parasitic pathogen recognition. The system was built on the Parasitic Diseases Specimen Image Digitization Construction Database, using MySQL 5.0 as the database development software and PHP 5 as the interface development language. It is mainly used for online training and evaluation of parasitic pathology diagnostic techniques. The system interface was designed to be simple, flexible, and easy for medical staff to operate, and the system is accessible 24 hours a day for online training and evaluation, thus breaking the time and space constraints of traditional training models. The system provides a shared platform for professional training on parasitic diseases and a reference for other training tasks.
NASA Astrophysics Data System (ADS)
Babayan, Pavel; Smirnov, Sergey; Strotov, Valery
2017-10-01
This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models, and is based on building an outer contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects. Using the 3D models, we build a database of training images by rendering each 3D model from viewpoints evenly distributed on a sphere; the sphere points are distributed according to the geosphere principle. The gathered training image set is used to calculate the descriptors employed in the recognition stage of the algorithm. The recognition stage focuses on estimating the similarity of the captured object and the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
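Geosphere-style viewpoint sampling is commonly implemented by subdividing an icosahedron and projecting the new vertices onto the unit sphere; whether the authors used exactly this construction is not stated, so the sketch below is illustrative only.

    import numpy as np

    # Icosahedron vertices and faces (standard construction).
    phi = (1 + 5 ** 0.5) / 2
    verts = [(-1, phi, 0), (1, phi, 0), (-1, -phi, 0), (1, -phi, 0),
             (0, -1, phi), (0, 1, phi), (0, -1, -phi), (0, 1, -phi),
             (phi, 0, -1), (phi, 0, 1), (-phi, 0, -1), (-phi, 0, 1)]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

    def subdivide(verts, faces):
        # Split each triangle into four, projecting midpoints onto the sphere.
        verts = [np.array(v, float) / np.linalg.norm(v) for v in verts]
        cache, new_faces = {}, []
        def midpoint(i, j):
            key = tuple(sorted((i, j)))
            if key not in cache:
                m = (verts[i] + verts[j]) / 2
                verts.append(m / np.linalg.norm(m))
                cache[key] = len(verts) - 1
            return cache[key]
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        return verts, new_faces

    v, f = subdivide(verts, faces)
    print(len(v), "viewpoints after one subdivision")   # 42 nearly even camera positions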
Physiology-based face recognition in the thermal infrared spectrum.
Buddharaju, Pradeep; Pavlidis, Ioannis T; Tsiamyrtzis, Panagiotis; Bazakos, Mike
2007-04-01
The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as Thermal Minutia Points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect five different pose images (center, mid-left profile, left profile, mid-right profile, and right profile) for each subject stored in the database. During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
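One step lends itself to a compact sketch: locating branching points on the skeletonized vascular network. The neighbour-counting rule below is a common heuristic, not necessarily the authors' exact TMP detector:

```python
# Hedged sketch of one pipeline step: a skeleton pixel with three or more
# skeleton neighbours is taken as a branching point (Thermal Minutia Point).
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def thermal_minutia_points(vessel_mask):
    """vessel_mask: 2D boolean array of the extracted vascular network."""
    skel = skeletonize(vessel_mask)
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    return np.argwhere(skel & (neighbours >= 3))   # (row, col) of each TMP
```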
The Papillomavirus Episteme: a major update to the papillomavirus sequence database.
Van Doorslaer, Koenraad; Li, Zhiwen; Xirasagar, Sandhya; Maes, Piet; Kaminsky, David; Liou, David; Sun, Qiang; Kaur, Ramandeep; Huyen, Yentram; McBride, Alison A
2017-01-04
The Papillomavirus Episteme (PaVE) is a database of curated papillomavirus genomic sequences, accompanied by web-based sequence analysis tools. This update describes the addition of major new features. The papillomavirus genomes within PaVE have been further annotated, and now include the major spliced mRNA transcripts. Viral genes and transcripts can be visualized on both linear and circular genome browsers. Evolutionary relationships among PaVE reference protein sequences can be analysed using multiple sequence alignments and phylogenetic trees. To assist in viral discovery, PaVE offers a typing tool: a simplified algorithm to determine whether a newly sequenced virus is novel. PaVE also now contains an image library of gross clinical and histopathological images of papillomavirus-infected lesions. Database URL: https://pave.niaid.nih.gov/. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Martin, Kelly P.
1999-07-01
The purpose of this paper is to integrate multiple DICOM image webservers into the currently existing enterprise-wide web-browsable electronic medical record. Over the last six years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A character cell-based view of this data called the Mini Medical Record (MMR) has been available for four years. MINDscape, unlike the text-based MMR, provides a platform-independent, dynamic, web browser view of the MIND database that can be easily linked with medical knowledge resources on the network, like PubMed and the Federated Drug Reference. There are over 10,000 MINDscape user accounts at the University of Washington Academic Medical Centers. The weekday average number of hits to MINDscape is 35,302 and the weekday average number of individual users is 1,252. DICOM images from multiple webservers are now being viewed through the MINDscape electronic medical record.
NASA Astrophysics Data System (ADS)
Castillo, Richard; Castillo, Edward; Fuentes, David; Ahmad, Moiz; Wood, Abbie M.; Ludwig, Michelle S.; Guerrero, Thomas
2013-05-01
Landmark point-pairs provide a strategy to assess deformable image registration (DIR) accuracy in terms of the spatial registration of the underlying anatomy depicted in medical images. In this study, we propose to augment a publicly available database (www.dir-lab.com) of medical images with large sets of manually identified anatomic feature pairs between breath-hold computed tomography (BH-CT) images for DIR spatial accuracy evaluation. Ten BH-CT image pairs were randomly selected from the COPDgene study cases. Each patient had received CT imaging of the entire thorax in the supine position at one-fourth dose normal expiration and maximum effort full dose inspiration. Using dedicated in-house software, an imaging expert manually identified large sets of anatomic feature pairs between images. Estimates of inter- and intra-observer spatial variation in feature localization were determined by repeat measurements of multiple observers over subsets of randomly selected features. A total of 7,298 anatomic landmark features were manually paired between the 10 sets of images. The number of feature pairs per case ranged from 447 to 1,172. Average 3D Euclidean landmark displacements varied substantially among cases, ranging from 12.29 (SD: 6.39) to 30.90 (SD: 14.05) mm. Repeat registration of uniformly sampled subsets of 150 landmarks for each case yielded estimates of observer localization error, which ranged on average from 0.58 (SD: 0.87) to 1.06 (SD: 2.38) mm per case. The additions to the online web database (www.dir-lab.com) described in this work will broaden the applicability of the reference data, providing a freely available common dataset for targeted critical evaluation of DIR spatial accuracy performance in multiple clinical settings. Estimates of observer variance in feature localization suggest consistent spatial accuracy for all observers across both four-dimensional CT and COPDgene patient cohorts.
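The reported displacement statistics reduce to a few lines of NumPy; the voxel spacing in the usage line is invented for illustration:

```python
# Minimal sketch of the reported statistics: mean and SD of the 3D Euclidean
# displacement between paired landmarks, given voxel spacing in mm.
import numpy as np

def landmark_displacement_stats(pts_expiration, pts_inspiration, spacing_mm):
    """pts_*: (n, 3) voxel coordinates of paired features; spacing_mm: (3,)."""
    delta = (pts_inspiration - pts_expiration) * np.asarray(spacing_mm)
    d = np.linalg.norm(delta, axis=1)
    return d.mean(), d.std()

# e.g. mean, sd = landmark_displacement_stats(exp_pts, insp_pts, (0.97, 0.97, 2.5))
```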
ESIM: Edge Similarity for Screen Content Image Quality Assessment.
Ni, Zhangkai; Ma, Lin; Zeng, Huanqiang; Chen, Jing; Cai, Canhui; Ma, Kai-Kuang
2017-10-01
In this paper, an accurate full-reference image quality assessment (IQA) model developed for assessing screen content images (SCIs), called the edge similarity (ESIM), is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to edges, which are often encountered in SCIs; therefore, essential edge features are extracted and exploited for conducting IQA of SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features: edge contrast, edge width, and edge direction. The first two attributes are simultaneously generated from the input SCI based on a parametric edge model, while the last one is derived directly from the input SCI. The extraction of these three features is performed for the reference SCI and the distorted SCI individually. The degree of similarity measured for each above-mentioned edge attribute is then computed independently, and the results are combined using our proposed edge-width pooling strategy to generate the final ESIM score. To conduct the performance evaluation of our proposed ESIM model, a new and the largest SCI database (denoted SCID) has been established in our work and made publicly available for download. Our database contains 1,800 distorted SCIs generated from 40 reference SCIs. For each SCI, nine distortion types are investigated, and five degradation levels are produced for each distortion type. Extensive simulation results clearly show that the proposed ESIM model is more consistent with HVS perception in the evaluation of distorted SCIs than multiple state-of-the-art IQA methods.
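The per-attribute similarity maps can be sketched with a generic SSIM-family measure; the actual ESIM formulation, constants, and edge-width pooling weights are not reproduced here:

```python
# Generic similarity measure of the SSIM family, shown per edge attribute.
# C is a small stability constant (assumed value, not the paper's).
import numpy as np

def attribute_similarity(f_ref, f_dist, C=1e-3):
    """f_ref, f_dist: per-pixel maps of one edge attribute (contrast, width
    or direction) for the reference and distorted SCI."""
    return (2 * f_ref * f_dist + C) / (f_ref ** 2 + f_dist ** 2 + C)

def esim_like_score(sim_maps, weight_map):
    # Pool each per-attribute similarity map with a weight map (e.g. edge
    # width), then average the pooled values into one quality score.
    pooled = [np.average(s, weights=weight_map) for s in sim_maps]
    return float(np.mean(pooled))
```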
Database of Geoscientific References Through 2007 for Afghanistan, Version 2
Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L.
2007-01-01
This report describes an accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards, currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,462) and unpublished (n = 174) references compiled through September 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required.
Casimage project: a digital teaching files authoring environment.
Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman
2004-04-01
The goal of the Casimage project is to offer an authoring and editing environment integrated with Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. The software is based on a client/server architecture allowing users remote access to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools needed to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections into the RSNA web network dedicated to teaching files. This software can be installed on any PACS workstation, allowing users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on CD-ROM, on our web site, and through the MIRC network for public access.
Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph
NASA Astrophysics Data System (ADS)
Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri
2018-01-01
The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at the high contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.
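The PCA-based PSF subtraction at the heart of the reduction can be sketched in plain NumPy (the VIP library itself provides far more refined routines; this shows only the principle):

```python
# Sketch of PSF subtraction by principal component analysis: fit a low-rank
# model of the stellar point spread function over the frame stack and keep
# the residuals, in which a faint companion may survive.
import numpy as np

def pca_psf_subtract(cube, n_components=5):
    """cube: (n_frames, ny, nx) stack of coronagraphic frames."""
    n, ny, nx = cube.shape
    X = cube.reshape(n, ny * nx)
    X = X - X.mean(axis=0)                     # centre each pixel over time
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    model = X @ Vt[:n_components].T @ Vt[:n_components]   # low-rank PSF model
    return (X - model).reshape(n, ny, nx)      # residuals hiding the star
```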
PMAG: Relational Database Definition
NASA Astrophysics Data System (ADS)
Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.
2002-12-01
The Scripps center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses around the general workflow that results in the determination of typical paleo-magnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data becomes available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the site location (latitude, longitude, elevation), geography (continent, country, region), geological setting (lithospheric plate or block, tectonic setting), geological age (age range, timescale name, stratigraphic position) and materials (rock type, classification, alteration state). Each data point and method description is also related to its peer-reviewed reference [citation ID] as archived in the EarthRef Reference Database (ERR). This guarantees direct traceability all the way to its original source, where the user can find the bibliography of each PMAG reference along with every abstract, data table, technical note and/or appendix that are available in digital form and that can be downloaded as PDF/JPEG images and Microsoft Excel/Word data files. This may help scientists and teachers in performing their research since they have easy access to all the scientific data. It also allows for checking potential errors during the digitization process. Please visit the PMAG website at http://earthref.org/PMAG/ for more information.
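The specimen-to-expedition chain described above might look like the following SQLite skeleton; table and column names are illustrative, not the actual PMAG schema:

```python
# Hypothetical skeleton of the expedition -> locality -> site -> sample ->
# specimen hierarchy, with raw measurements attached at the specimen level
# and a citation link into the ERR reference database.
import sqlite3

ddl = """
CREATE TABLE expedition (exp_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE locality   (loc_id INTEGER PRIMARY KEY, exp_id INTEGER REFERENCES expedition,
                         latitude REAL, longitude REAL, elevation REAL);
CREATE TABLE site       (site_id INTEGER PRIMARY KEY, loc_id INTEGER REFERENCES locality,
                         age_min REAL, age_max REAL, timescale TEXT);
CREATE TABLE sample     (sample_id INTEGER PRIMARY KEY, site_id INTEGER REFERENCES site,
                         rock_type TEXT);
CREATE TABLE specimen   (spec_id INTEGER PRIMARY KEY, sample_id INTEGER REFERENCES sample,
                         citation_id TEXT);  -- link into the ERR reference database
CREATE TABLE measurement(meas_id INTEGER PRIMARY KEY, spec_id INTEGER REFERENCES specimen,
                         method TEXT, value REAL);  -- raw data live at specimen level
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```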
Automated cloud and shadow detection and filling using two-date Landsat imagery in the United States
Jin, Suming; Homer, Collin G.; Yang, Limin; Xian, George; Fry, Joyce; Danielson, Patrick; Townsend, Philip A.
2013-01-01
A simple, efficient, and practical approach for detecting cloud and shadow areas in satellite imagery and restoring them with clean pixel values has been developed. Cloud and shadow areas are detected using spectral information from the blue, shortwave infrared, and thermal infrared bands of Landsat Thematic Mapper or Enhanced Thematic Mapper Plus imagery from two dates (a target image and a reference image). These detected cloud and shadow areas are further refined using an integration process and a false shadow removal process according to the geometric relationship between cloud and shadow. Cloud and shadow filling is based on the concept of the Spectral Similarity Group (SSG), which uses the reference image to find similar alternative pixels in the target image to serve as replacement values for restored areas. Pixels are considered to belong to one SSG if the pixel values from Landsat bands 3, 4, and 5 in the reference image are within the same spectral ranges. This new approach was applied to five Landsat path/rows across different landscapes and seasons with various types of cloud patterns. Results show that almost all of the clouds were captured with minimal commission errors, and shadows were detected reasonably well. Among five test scenes, the lowest producer's accuracy of cloud detection was 93.9% and the lowest user's accuracy was 89%. The overall cloud and shadow detection accuracy ranged from 83.6% to 99.3%. The pixel-filling approach resulted in a new cloud-free image that appears seamless and spatially continuous despite differences in phenology between the target and reference images. Our methods offer a straightforward and robust approach for preparing images for the new 2011 National Land Cover Database production.
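A simplified sketch of the SSG filling idea follows (the bin width and group encoding are assumptions; the paper defines the spectral ranges itself):

```python
# Spectral Similarity Group (SSG) filling, simplified: pixels are grouped by
# binning reference-image bands 3, 4 and 5, and each cloudy target pixel is
# replaced by the mean of clean target pixels belonging to the same group.
import numpy as np

def ssg_fill(target, reference, cloud_mask, bin_width=16):
    """target, reference: (ny, nx, 3) arrays of bands 3-5; cloud_mask: bool."""
    bins = (reference // bin_width).astype(int)
    key = bins[..., 0] * 10000 + bins[..., 1] * 100 + bins[..., 2]  # one id per SSG
    filled = target.copy()
    for g in np.unique(key[cloud_mask]):
        member, clean = (key == g), (key == g) & ~cloud_mask
        if clean.any():
            filled[member & cloud_mask] = target[clean].mean(axis=0)
    return filled
```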
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
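The wavelet/scalar quantization principle can be illustrated with PyWavelets; note this is not the FBI WSQ codec, which additionally specifies the analysis filters, bin widths, and entropy coding:

```python
# Toy illustration of wavelet/scalar quantization: a subband decomposition
# followed by uniform scalar quantization of each subband, then decoding.
import numpy as np
import pywt

def quantize_roundtrip(image, wavelet="bior4.4", levels=3, step=8.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step) * step]        # approximation band
    for detail in coeffs[1:]:
        quantized.append(tuple(np.round(d / step) * step for d in detail))
    return pywt.waverec2(quantized, wavelet)               # decoded image
```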
Reiner, Bruce
2015-06-01
One of the greatest challenges facing healthcare professionals is the ability to directly and efficiently access relevant data from the patient's healthcare record at the point of care; specific to both the context of the task being performed and the specific needs and preferences of the individual end-user. In radiology practice, the relative inefficiency of imaging data organization and manual workflow requirements serves as an impediment to historical imaging data review. At the same time, clinical data retrieval is even more problematic due to the quality and quantity of data recorded at the time of order entry, along with the relative lack of information system integration. One approach to address these data deficiencies is to create a multi-disciplinary patient referenceable database which consists of high-priority, actionable data within the cumulative patient healthcare record; in which predefined criteria are used to categorize and classify imaging and clinical data in accordance with anatomy, technology, pathology, and time. The population of this referenceable database can be performed through a combination of manual and automated methods, with an additional step of data verification introduced for data quality control. Once created, these referenceable databases can be filtered at the point of care to provide context and user-specific data specific to the task being performed and individual end-user requirements.
3D visualization of molecular structures in the MOGADOC database
NASA Astrophysics Data System (ADS)
Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen
2010-08-01
The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool to retrieve information about compounds which have been studied in the gas-phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (special editor and Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.
Seasonal Course of Boreal Forest Reflectance
NASA Astrophysics Data System (ADS)
Rautiainen, M.; Heiskanen, J.; Mottus, M.; Eigemeier, E.; Majasalmi, T.; Vesanto, V.; Stenberg, P.
2011-12-01
According to the IPCC 2007 report, northern ecosystems are especially likely to be affected by climate change. Therefore, understanding the seasonal dynamics of boreal ecosystems and linking their phenological phases to satellite reflectance data is crucial for the efficient monitoring and modeling of northern hemisphere vegetation dynamics and productivity trends in the future. The seasonal reflectance course of a boreal forest is a result of the temporal cycle in optical properties of both the tree canopy and understory layers. Seasonal reflectance changes of the two layers are explained by the complex combination of changes in biochemistry and geometrical structure of different plant species as well as seasonal and diurnal variation in solar illumination. Analyzing the role of each of the contributing factors can only be achieved by linking radiative transfer modeling to empirical reflectance data sets. The aim of our project is to identify the seasonal reflectance changes and their driving factors in boreal forests from optical satellite images using new forest reflectance modeling techniques based on the spectral invariants theory. We have measured an extensive ground reference database on the seasonal changes of structural and optical properties of tree canopy and understory layers for a boreal forest site in central Finland in 2010. The database is complemented by a concurrent time series of Hyperion and SPOT satellite images. We use the empirical ground reference database as input to forest reflectance simulations and validate our simulation results using the empirical reflectance data obtained from satellite images. Based on our simulation results, we quantify 1) the driving factors influencing the seasonal reflectance courses of a boreal forest, and 2) the relative contribution of the understory and tree-level layers to forest reflectance as the growing season proceeds.
Structural model constructing for optical handwritten character recognition
NASA Astrophysics Data System (ADS)
Khaustov, P. A.; Spitsyn, V. G.; Maksimova, E. I.
2017-02-01
The article is devoted to the development of algorithms for optical handwritten character recognition based on constructing structural models. The main advantage of these algorithms is their low requirement on the number of reference images. A one-pass approach to thinning the binary character representation is proposed, based on the joint use of the Zhang-Suen and Wu-Tsai algorithms. The effectiveness of the proposed approach is confirmed by the results of the experiments. The article includes a detailed description of the steps of the structural-model construction algorithm. The proposed algorithm has been implemented in a character-processing application and validated on the MNIST handwritten character database. Algorithms suitable for a limited number of reference images were used for comparison.
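For reference, scikit-image's default 2D skeletonization implements the Zhang(-Suen) method, so the thinning step can be approximated in one call; the paper's joint Zhang-Suen/Wu-Tsai one-pass variant is not reproduced:

```python
# Thinning a binary glyph to a one-pixel-wide skeleton with scikit-image,
# whose default 2D method is Zhang's thinning algorithm.
from skimage.morphology import skeletonize

def thin_character(binary_glyph):
    """binary_glyph: 2D boolean array, True on ink pixels."""
    return skeletonize(binary_glyph)   # one-pixel-wide structural skeleton
```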
Automatic localization of the nipple in mammograms using Gabor filters and the Radon transform
NASA Astrophysics Data System (ADS)
Chakraborty, Jayasree; Mukhopadhyay, Sudipta; Rangayyan, Rangaraj M.; Sadhu, Anup; Azevedo-Marques, P. M.
2013-02-01
The nipple is an important landmark in mammograms. Detection of the nipple is useful for alignment and registration of mammograms in computer-aided diagnosis of breast cancer. In this paper, a novel approach is proposed for automatic detection of the nipple based on the oriented patterns of the breast tissues present in mammograms. The Radon transform is applied to the oriented patterns obtained by a bank of Gabor filters to detect the linear structures related to the tissue patterns. The detected linear structures are then used to locate the nipple position using the characteristics of convergence of the tissue patterns towards the nipple. The performance of the method was evaluated with 200 scanned-film images from the mini-MIAS database and 150 digital radiography (DR) images from a local database. Average errors of 5.84 mm and 6.36 mm were obtained with respect to the reference nipple location marked by a radiologist for the mini-MIAS and the DR images, respectively.
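The two building blocks named above can be sketched with scikit-image; frequencies and orientation counts are illustrative only:

```python
# A Gabor filter bank to estimate local tissue orientation, followed by the
# Radon transform to pick out dominant linear structures.
import numpy as np
from skimage.filters import gabor
from skimage.transform import radon

def dominant_orientation_map(image, frequency=0.1, n_orientations=8):
    responses = []
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        real, _ = gabor(image, frequency=frequency, theta=theta)
        responses.append(real)
    return np.argmax(np.abs(responses), axis=0)   # index of strongest filter

def strongest_line(orientation_map):
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(orientation_map.astype(float), theta=theta)
    return np.unravel_index(np.argmax(sinogram), sinogram.shape)  # (offset, angle)
```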
Frampton, G K; Kalita, N; Payne, L; Colquitt, J L; Loveman, E; Downes, S M; Lotery, A J
2017-07-01
We conducted a systematic review of the accuracy of fundus autofluorescence (FAF) imaging for diagnosing and monitoring retinal conditions. Searches in November 2014 identified English language references. Sources included MEDLINE, EMBASE, the Cochrane Library, Web of Science, and MEDION databases; reference lists of retrieved studies; and internet pages of relevant organisations, meetings, and trial registries. For inclusion, studies had to report FAF imaging accuracy quantitatively. Studies were critically appraised using QUADAS risk of bias criteria. Two reviewers conducted all review steps. From 2240 unique references identified, eight primary research studies met the inclusion criteria. These investigated diagnostic accuracy of FAF imaging for choroidal neovascularisation (one study), reticular pseudodrusen (three studies), cystoid macular oedema (two studies), and diabetic macular oedema (two studies). Diagnostic sensitivity of FAF imaging ranged from 32 to 100% and specificity from 34 to 100%. However, owing to methodological limitations, including high and/or unclear risks of bias, none of these studies provides conclusive evidence of the diagnostic accuracy of FAF imaging. Study heterogeneity precluded meta-analysis. In most studies, the patient spectrum was not reflective of those who would present in clinical practice and no studies adequately reported whether FAF images were interpreted consistently. No studies of monitoring accuracy were identified. An update in October 2016, based on MEDLINE and internet searches, identified four new studies but did not alter our conclusions. Robust quantitative evidence on the accuracy of FAF imaging and how FAF images are interpreted is lacking. We provide recommendations to address this.
Learning to rank for blind image quality assessment.
Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
2015-10-01
Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs), such as "the quality of image Ia is better than that of image Ib," for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at a very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from the image features to the preference label as one of classification. In particular, we investigate the utilization of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves a performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can be easily extended to new distortion categories.
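The PIP idea reduces to classification on feature differences. The sketch below swaps the paper's multiple kernel learning with group lasso for plain logistic regression, purely for illustration:

```python
# Each preference pair becomes one classification example whose input is
# the feature difference and whose label says which image is better.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_on_pips(feats_a, feats_b, prefer_a):
    """feats_a, feats_b: (n_pairs, d) features of the two images in each pair;
    prefer_a: (n_pairs,) booleans, True when image a has the better quality."""
    X = feats_a - feats_b
    return LogisticRegression(max_iter=1000).fit(X, prefer_a.astype(int))

def quality_score(model, feats):
    # A larger projection on the learned direction = better predicted quality.
    return feats @ model.coef_.ravel()
```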
NASA Astrophysics Data System (ADS)
Gålfalk, Magnus; Karlson, Martin; Crill, Patrick; Bousquet, Philippe; Bastviken, David
2018-03-01
The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro HERO4 Silver, GoPro Inc.) was used to take wide-angle images from 3.1 to 4.5 m in altitude using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers in present lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common non-expensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the Supplement. Over time a global ground cover database can be built that can be used as reference data for studies of non-forested wetlands from satellites such as Sentinel 1 and 2 (10 m pixel size).
NASA Astrophysics Data System (ADS)
Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane
2018-05-01
Many routine medical examinations produce images of patients suffering from various pathologies. Given the huge number of medical images, manual analysis and interpretation has become a tedious task, so automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields (HMRF) to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests were conducted on brain magnetic resonance image databases (BrainWeb and IBSR) widely used to objectively compare results. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches the perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
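A toy version of the energy minimisation follows: a quadratic data term plus neighbour smoothness, minimised with SciPy's BFGS. The real HMRF energy (class means, variances, and discrete labels) is more elaborate, and the numerical gradient here only suits toy image sizes:

```python
# Minimise E(f) = sum (f - image)^2 + beta * sum of squared neighbour
# differences over a continuous field f, using BFGS.
import numpy as np
from scipy.optimize import minimize

def segment_bfgs(image, beta=0.5):
    ny, nx = image.shape
    def energy(x):
        f = x.reshape(ny, nx)
        data = np.sum((f - image) ** 2)
        smooth = (np.sum((f[1:, :] - f[:-1, :]) ** 2)
                  + np.sum((f[:, 1:] - f[:, :-1]) ** 2))
        return data + beta * smooth
    res = minimize(energy, image.ravel(), method="BFGS")
    return res.x.reshape(ny, nx)   # smoothed field; threshold into regions later
```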
Assessing telemedicine: a systematic review of the literature.
Roine, R; Ohinmaa, A; Hailey, D
2001-09-18
To clarify the current status of telemedicine, we carried out a systematic review of the literature. We identified controlled assessment studies of telemedicine that reported patient outcomes, administrative changes or economic assessments and assessed the quality of that literature. We carried out a systematic electronic search for articles published from 1966 to early 2000 using the MEDLINE (1966-April 2000), HEALTHSTAR (1975-January 2000), EMBASE (1988-February 2000) and CINAHL (1982-January 2000) databases. In addition, the HSTAT database (Health Services/Technology Assessment Text, US National Library of Medicine), the Database of Abstracts of Reviews of Effectiveness (DARE, NHS Centre for Reviews and Dissemination, United Kingdom), the NHS Economic Evaluation Database and the Cochrane Controlled Trials Register were searched. We consulted experts in the field and did a manual search of the reference lists of review articles. A total of 1,124 studies were identified. Based on a review of the abstracts, 133 full-text articles were obtained for closer inspection. Of these, 50 were deemed to represent assessment studies fulfilling the inclusion criteria of the review. Thirty-four of the articles assessed at least some clinical outcomes; the remaining 16 were mainly economic analyses. Most of the available literature referred only to pilot projects and short-term outcomes, and most of the studies were of low quality. Relatively convincing evidence of effectiveness was found only for teleradiology, teleneurosurgery, telepsychiatry, transmission of echocardiographic images, and the use of electronic referrals enabling e-mail consultations and video conferencing between primary and secondary health care providers. Economic analyses suggested that teleradiology, especially transmission of CT images, can be cost-saving. Evidence regarding the effectiveness or cost-effectiveness of telemedicine is still limited. Based on current scientific evidence, only a few telemedicine applications can be recommended for broader use.
Sensors integration for smartphone navigation: performances and future challenges
NASA Astrophysics Data System (ADS)
Aicardi, I.; Dabove, P.; Lingua, A.; Piras, M.
2014-08-01
Nowadays modern smartphones include several sensors commonly adopted in geomatic applications, such as digital cameras, GNSS (Global Navigation Satellite System) receivers, inertial platforms, and RFID and Wi-Fi systems. In this paper the authors test the positioning performance of the internal sensors (Inertial Measurement Unit, IMU) of three modern smartphones (Samsung Galaxy S4, Samsung Galaxy S5 and iPhone 4) against an external mass-market IMU platform in order to verify their accuracy levels. Moreover, the Image Based Navigation (IBN) approach is also investigated: this approach can be very useful in harsh urban environments or for indoor positioning, as an alternative to GNSS positioning. IBN can achieve sub-metre accuracy, but a special database of georeferenced images (Image DataBase, IDB) is needed; moreover, dedicated algorithms must be used to resize the images collected by the smartphone so they can be shared with the server where the IDB is stored, and the smartphone camera lens must be characterized in terms of focal length and lens distortions. The authors have developed a method that is innovative with respect to those available today, and have tested it in a covered area using a special support on which all sensors under testing were installed. Geomatic instruments were used to define the reference trajectory, with the purpose of comparing it with the path obtained by the IBN solution. First results show horizontal and vertical accuracies better than 60 cm with respect to the reference trajectories. The IBN method, sensors, tests and results are described in the paper.
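The IBN matching core might be sketched with OpenCV ORB features as below; the database layout and field names are invented, and the authors' actual matching pipeline is not described in enough detail to reproduce:

```python
# Hedged sketch of image-based navigation: match the smartphone photo
# against a small georeferenced image database (IDB) and adopt the position
# of the best-matching database image.
import cv2

def ibn_locate(query_path, idb):
    """idb: list of dicts {'path': ..., 'position': (E, N, h)} (hypothetical)."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    q_img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(q_img, None)
    best, best_score = None, -1
    for entry in idb:
        img = cv2.imread(entry["path"], cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        if desc is None:
            continue
        score = len(matcher.match(q_desc, desc))   # number of cross-checked matches
        if score > best_score:
            best, best_score = entry, score
    return best, best_score
```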
eCTG: an automatic procedure to extract digital cardiotocographic signals from digital images.
Sbrollini, Agnese; Agostinelli, Angela; Marcantoni, Ilaria; Morettini, Micaela; Burattini, Luca; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura
2018-03-01
Cardiotocography (CTG), consisting in the simultaneous recording of fetal heart rate (FHR) and maternal uterine contractions (UC), is a popular clinical test to assess fetal health status. Typically, CTG machines provide paper reports that are visually interpreted by clinicians. Consequently, visual CTG interpretation depends on the clinician's experience and has poor reproducibility. The lack of databases containing digital CTG signals has limited the number and importance of retrospective studies aimed at setting up procedures for automatic CTG analysis that could counter the subjectivity of visual CTG interpretation. To help overcome this problem, this study proposes an electronic procedure, termed eCTG, to extract digital CTG signals from digital CTG images, possibly obtainable by scanning paper CTG reports. eCTG includes four main steps: pre-processing, Otsu's global thresholding, signal extraction and signal calibration. It was validated by means of the "CTU-UHB Intrapartum Cardiotocography Database" by Physionet, which contains the digital signals of 552 CTG recordings. Using MATLAB, each signal was plotted and saved as a digital image that was then submitted to eCTG. Digital CTG signals extracted by eCTG were eventually compared to the corresponding signals directly available in the database. Comparison occurred in terms of signal similarity (evaluated by the correlation coefficient ρ and the mean signal error MSE) and clinical features (including FHR baseline and variability; number, amplitude and duration of tachycardia, bradycardia, acceleration and deceleration episodes; number of early, variable, late and prolonged decelerations; and UC number, amplitude, duration and period). The value of ρ between eCTG and reference signals was 0.85 (P < 10^-560) for FHR and 0.97 (P < 10^-560) for UC. On average, the MSE was 0.00 for both FHR and UC. No CTG feature was found to be significantly different when measured in eCTG vs. reference signals. The eCTG procedure is a promising tool to accurately extract digital FHR and UC signals from digital CTG images. Copyright © 2018 Elsevier B.V. All rights reserved.
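Two of the four eCTG steps, Otsu thresholding and a naive signal extraction, can be sketched as follows (calibration to bpm/mmHg and the pre-processing step are omitted):

```python
# Otsu's global threshold separates ink from paper; a naive column-wise
# extraction then takes the mean row of ink pixels in each image column.
import numpy as np
from skimage.filters import threshold_otsu

def extract_trace(gray_image):
    """gray_image: 2D array of a scanned CTG strip (dark trace, light paper)."""
    ink = gray_image < threshold_otsu(gray_image)
    ny = gray_image.shape[0]
    signal = np.full(gray_image.shape[1], np.nan)
    for col in range(gray_image.shape[1]):
        rows = np.flatnonzero(ink[:, col])
        if rows.size:
            signal[col] = ny - rows.mean()   # flip so larger = higher on paper
    return signal
```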
Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L.
2008-01-01
This report includes a document and an accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,489) and unpublished (n = 176) references compiled through calendar year 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required.
Color image definition evaluation method based on deep learning method
NASA Astrophysics Data System (ADS)
Liu, Di; Li, YingChun
2018-01-01
In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and labeled images are then used to train the BP neural network, which finally performs the color image definition evaluation. The method is tested on images from the CSIQ database, blurred at different levels, yielding 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Of every 400 samples, 300 high-dimensional feature vectors are used to train the VGG16 net and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method can take full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
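The feature-extraction step corresponds to the 4,096-dimensional activations of VGG16's first fully connected layer ('fc1' in Keras's bundled model); a minimal sketch, with the downstream BP classifier omitted:

```python
# Extract 4096-d features from the first fully connected layer of VGG16.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc1").output)

def vgg16_features(batch):
    """batch: (n, 224, 224, 3) float array of RGB images."""
    return extractor.predict(preprocess_input(batch.copy()))   # (n, 4096)
```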
Learning Computational Models of Video Memorability from fMRI Brain Imaging.
Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming
2015-08-01
Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
Content based information retrieval in forensic image databases.
Geradts, Zeno; Bijhold, Jurrien
2002-03-01
This paper gives an overview of the various available image databases and ways of searching these databases on image content. Developments by research groups in searching image databases are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, in which searching in image databases is standardized. In the future, the combination of these databases (including DNA databases) and the possibilities to combine them can result in stronger forensic evidence.
Third molar development in a contemporary Danish 13-25-year-old population.
Arge, Sara; Boldsen, Jesper Lier; Wenzel, Ann; Holmstrup, Palle; Jensen, Niels Dyrgaard; Lynnerup, Niels
2018-05-16
We present a reference database for third molar development based on a contemporary Danish population. A total of 1,302 digital panoramic images were evaluated. The images were taken at a known chronological age, ranging from 13 to 25 years. Third molar development was scored according to the Köhler modification of the 10-stage method of Gleiser and Hunt. We found that third molar development was generally advanced in the maxilla compared to the mandible and in males compared to females; in addition, the mandibular third molar mesial roots were generally more advanced in development than were the distal roots. There was no difference in third molar development between the left and right sides of the jaws. Establishing global and robust databases on dental development is crucial for further development of forensic methods to evaluate age. Copyright © 2018. Published by Elsevier B.V.
Identifying tips for intramolecular NC-AFM imaging via in situ fingerprinting
NASA Astrophysics Data System (ADS)
Sang, Hongqian; Jarvis, Samuel P.; Zhou, Zhichao; Sharp, Peter; Moriarty, Philip; Wang, Jianbo; Wang, Yu; Kantorovich, Lev
2014-10-01
A practical experimental strategy is proposed that could potentially enable greater control of the tip apex in non-contact atomic force microscopy experiments. It is based on a preparation of a structure of interest alongside a reference surface reconstruction on the same sample. Our proposed strategy is as follows. Spectroscopy measurements are first performed on the reference surface to identify the tip apex structure using a previously collected database of responses of different tips to this surface. Next, immediately following the tip identification protocol, the surface of interest is studied (imaging, manipulation and/or spectroscopy). The prototype system we choose is the mixed Si(111)-7×7 and surface which can be prepared on the same sample with a controlled ratio of reactive and passivated regions. Using an "in silico" approach based on ab initio density functional calculations and a set of tips with varying chemical reactivities, we show how one can perform tip fingerprinting using the Si(111)-7×7 reference surface. Then it is found by examining the imaging of a naphthalene tetracarboxylic diimide (NTCDI) molecule adsorbed on surface that negatively charged tips produce the best intramolecular contrast, attributed to the enhancement of repulsive interactions.
Radiotherapy supporting system based on the image database using IS&C magneto-optical disk
NASA Astrophysics Data System (ADS)
Ando, Yutaka; Tsukamoto, Nobuhiro; Kunieda, Etsuo; Kubo, Atsushi
1994-05-01
Since radiation oncologists make treatment plans based on prior experience, information about previous cases is helpful in planning radiation treatment. We have developed a supporting system for radiation therapy. The case-based reasoning method was implemented in order to search the treatments and images of past cases. This system evaluates similarities between the current case and all stored cases (the case base). The portal images of similar cases can be retrieved as reference images, as well as treatment records which show examples of the radiation treatment. With this system radiotherapists can easily make suitable radiation therapy plans. The system is useful for preventing inaccurate planning due to preconceptions and/or lack of knowledge. Images were stored on magneto-optical disks and the demographic data recorded on the hard disk of the personal computer. Images can be displayed quickly on the radiotherapist's demand. The radiation oncologist can refer to past cases recorded in the case base and decide the radiation treatment of the current case. The file and data format of the magneto-optical disk is the IS&C format. This format provides interchangeability and reproducibility of medical information, including images and other demographic data.
Chapter 4 - The LANDFIRE Prototype Project reference database
John F. Caratti
2006-01-01
This chapter describes the data compilation process for the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project) reference database (LFRDB) and explains the reference data applications for LANDFIRE Prototype maps and models. The reference database formed the foundation for all LANDFIRE tasks. All products generated by the...
DICOM image integration into an electronic medical record using thin viewing clients
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.
1998-07-01
Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A text-based view of this data called the Mini Medical Record (MMR) has been available for three years. MINDscape, unlike the text-based MMR, provides a platform independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider to our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links to the images on the DICOM webserver from MINDscape, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16-bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications. Conclusion -- Radiological DICOM images can be made available medical-center wide to physicians quickly using low-cost and ubiquitous, thin client browsing technology and wavelet compression.
Wright, Judy M; Cottrell, David J; Mir, Ghazala
2014-07-01
To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression, we examined 23 health, social science, religion, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database across all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championships, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we calibrate the swimming pool using the DLT (Direct Linear Transformation) algorithm. DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates; i.e., it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. We then apply a motion detection approach to detect the swimmer globally within this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane containing the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimensions of this reference are scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves on the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
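The DLT calibration step admits a compact plain-NumPy sketch: estimate the 3x3 homography mapping pixel coordinates to metric pool coordinates from at least four correspondences (a minimal sketch, not the authors' implementation):

```python
# Direct Linear Transformation: H is the null vector of the design matrix
# built from the point correspondences, recovered via SVD.
import numpy as np

def dlt_homography(pixels, metric):
    """pixels, metric: (n, 2) arrays of corresponding points, n >= 4."""
    rows = []
    for (u, v), (x, y) in zip(pixels, metric):
        rows.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        rows.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)       # smallest-singular-value right vector
    return H / H[2, 2]

def to_metric(H, u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]            # metric pool coordinates of pixel (u, v)
```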
Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review
Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai
2015-01-01
Gait is a biometric feature uniquely perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on the benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches. PMID:25574935
Craniofacial imaging informatics and technology development.
Vannier, M W
2003-01-01
'Craniofacial imaging informatics' refers to image and related scientific data from the dentomaxillofacial complex, and application of 'informatics techniques' (derived from disciplines such as applied mathematics, computer science and statistics) to understand and organize the information associated with the data. Major trends in information technology determine the progress made in craniofacial imaging and informatics. These trends include industry consolidation, disruptive technologies, Moore's law, electronic atlases and on-line databases. Each of these trends is explained and documented, relative to their influence on craniofacial imaging. Craniofacial imaging is influenced by major trends that affect all medical imaging and related informatics applications. The introduction of cone beam craniofacial computed tomography scanners is an example of a disruptive technology entering the field. An important opportunity lies in the integration of biologic knowledge repositories with craniofacial images. The progress of craniofacial imaging will continue subject to limitations imposed by the underlying technologies, especially imaging informatics. Disruptive technologies will play a major role in the evolution of this field.
A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.
Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F
2012-09-01
Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
An inductive method for automatic generation of referring physician prefetch rules for PACS.
Okura, Yasuhiko; Matsumura, Yasushi; Harauchi, Hajime; Sukenobu, Yoshiharu; Kou, Hiroko; Kohyama, Syunsuke; Yasuda, Norihiro; Yamamoto, Yuichiro; Inamura, Kiyonari
2002-12-01
To prefetch images in a hospital-wide picture archiving and communication system (PACS), a rule must be devised to permit accurate selection of examinations in which a patient's images are stored. We developed an inductive method to compose prefetch rules from practical data which were obtained in a hospital using a decision tree algorithm. Our methods were evaluated on data acquired in Osaka University Hospital for one month. The data collected consisted of 58,617 cases of consultation reservations, 643,797 examination histories of patients, and 323,993 records of image requests in PACS. Four parameters indicating whether the images of the patient were requested or not for each consultation reservation were derived from the database. As a result, the successful selection sensitivity for consultations in which images were requested was approximately 0.8, and the specificity for excluding consultations accurately where images were not requested was approximately 0.7.
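As an illustration of the rule-induction idea, the sketch below trains a decision tree on per-(consultation, prior examination) features; the four parameters actually derived from the hospital database are not named in the abstract, so the features here are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features for each (consultation, prior examination) pair;
# the paper's four parameters are not reproduced here.
feature_names = ["same_department", "days_since_exam",
                 "dept_request_rate", "same_modality"]
X = np.array([[1,   3, 0.9, 1],
              [0, 400, 0.1, 0],
              [1,  30, 0.7, 1],
              [0,  10, 0.2, 1],
              [1, 200, 0.4, 0]])
y = np.array([1, 0, 1, 0, 0])  # 1 = patient's images were requested

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # the induced prefetch rule
```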
dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs.
Ma, Kede; Liu, Wentao; Liu, Tongliang; Wang, Zhou; Tao, Dacheng
2017-05-26
Objective assessment of image quality is fundamentally important in many image processing tasks. In this work, we focus on learning blind image quality assessment (BIQA) models which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (whose dimensionality is the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIP) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation (gMAD) competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL Inferred Quality (dilIQ) index achieves an additional performance gain.
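For reference, a minimal sketch of the pairwise RankNet objective on one DIP, with the pair's perceptual-uncertainty weight folded in; this is the generic L2R loss, not the authors' full training pipeline.

```python
import numpy as np

def ranknet_loss(score_better, score_worse, certainty=1.0):
    """RankNet pairwise cross-entropy for one quality-discriminable
    image pair: the model score of the higher-quality image should
    exceed that of the lower-quality one. `certainty` down-weights
    pairs with a high perceptual uncertainty level."""
    p = 1.0 / (1.0 + np.exp(-(score_better - score_worse)))  # P(better > worse)
    return -certainty * np.log(p + 1e-12)

# e.g. ranknet_loss(0.8, 0.3) is small; ranknet_loss(0.3, 0.8) is large.
```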
[Selected aspects of computer-assisted literature management].
Reiss, M; Reiss, G
1998-01-01
We report on our own experiences with a bibliographic database manager. Bibliographic database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager lets the user enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Other common features include the ability to import references from different sources, such as MEDLINE. The word-processing components generate reference lists and bibliographies in a variety of styles directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described. Its advantages in fulfilling different requirements for citation style and the sort order of reference lists are emphasized.
Small PACS implementation using publicly available software
NASA Astrophysics Data System (ADS)
Passadore, Diego J.; Isoardi, Roberto A.; Gonzalez Nicolini, Federico J.; Ariza, P. P.; Novas, C. V.; Omati, S. A.
1998-07-01
Building cost-effective PACS solutions is a main concern in developing countries. Hardware and software components are generally much more expensive than in developed countries, and tighter financial constraints further slow the rate of PACS implementation. The extensive use of the Internet for sharing resources and information has brought a broad number of freely available software packages to an ever-increasing number of users. In the field of medical imaging it is possible to find image format conversion packages, DICOM-compliant servers for all kinds of service classes, databases, web servers, image visualization, manipulation and analysis tools, etc. This paper describes a PACS implementation for review and storage built on freely available software. It currently integrates four diagnostic modalities (PET, CT, MR and NM), a Radiotherapy Treatment Planning workstation and several computers in a local area network, for image storage, database management and image review, processing and analysis. It also includes a web-based application that allows remote users to query the archive for studies from any workstation and to view the corresponding images and reports. We conclude that the advantage of this approach is twofold: it allows a full understanding of all the issues involved in implementing a PACS, and it keeps costs down while enabling the development of a functional system for storage, distribution and review that can prove helpful for radiologists and referring physicians.
A new data management system for the French National Registry of human alveolar echinococcosis cases
Charbonnier, Amandine; Knapp, Jenny; Demonmerot, Florent; Bresson-Hadni, Solange; Raoul, Francis; Grenouillet, Frédéric; Millon, Laurence; Vuitton, Dominique Angèle; Damy, Sylvie
2014-01-01
Alveolar echinococcosis (AE) is an endemic zoonosis in France caused by the cestode Echinococcus multilocularis. The French National Reference Centre for Alveolar Echinococcosis (CNR-EA), connected to the FrancEchino network, is responsible for recording all AE cases diagnosed in France. Administrative, epidemiological and medical information on French AE cases may currently be considered exhaustive only at the time of diagnosis. To constitute a reference data set, an information system (IS) was developed using a relational database management system (MySQL). The current data set will evolve towards a dynamic surveillance system, including follow-up data (e.g. imaging, serology), and will be connected to environmental and parasitological data on E. multilocularis to better understand the pathogen's transmission pathway. A particularly important goal is the interoperability of the IS with similar European and other databases abroad; this new IS could play a supporting role in the creation of new AE registries. PMID:25526544
Correlation applied to the recognition of regular geometric figures
NASA Astrophysics Data System (ADS)
Lasso, William; Morales, Yaileth; Vega, Fabio; Díaz, Leonardo; Flórez, Daniel; Torres, Cesar
2013-11-01
We developed a system capable of recognizing regular geometric figures. Images are captured automatically by the software, which validates the presence of a figure in front of the camera lens; the digitized image is then compared with a database of previously captured images, recognized, and finally announced through spoken words giving the name of the identified figure. The contribution of the proposed system is that data acquisition is done in real time using spy smart glasses with a USB interface, yielding an equally effective but much more economical system. This tool may be useful as an application to give visually impaired people information about their surrounding environment.
Designing for Peta-Scale in the LSST Database
NASA Astrophysics Data System (ADS)
Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.
2007-10-01
The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes.
Koulaouzidis, Anastasios; Iakovidis, Dimitris K; Yung, Diana E; Rondonotti, Emanuele; Kopylov, Uri; Plevris, John N; Toth, Ervin; Eliakim, Abraham; Wurm Johansson, Gabrielle; Marlicz, Wojciech; Mavrogenis, Georgios; Nemeth, Artur; Thorlacius, Henrik; Tontini, Gian Eugenio
2017-06-01
Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult, as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used for this purpose. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and an MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers. The MLA performed best in measuring lymphangiectasias with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). The MLA can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential.
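For reference, the Jaccard Index used above to score annotation agreement can be computed from two binary lesion masks as follows (a straightforward sketch, not the KID project's own code):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard Index |A intersect B| / |A union B| between two binary
    masks, e.g. a human reader's annotation and the MLA's segmentation."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```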
National Institute of Standards and Technology Data Gateway
SRD 60 NIST ITS-90 Thermocouple Database (Web, free access) Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).
NASA Astrophysics Data System (ADS)
Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang
2014-11-01
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in agreement with human opinions; feature extraction is an important issue in such models. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant, which limits the performance of these models. To further improve NR-IQA performance, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract point-wise statistics of single pixel values, characterized by a generalized Gaussian distribution model, to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. A mapping from features to quality scores is then learned using support vector regression. The experimental results on the benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
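The sketch below illustrates the general recipe: simple spatial statistics combined with neighbouring gradient-magnitude-similarity statistics, regressed onto subjective scores with a support vector machine. The paper's exact generalized Gaussian fit and feature set are not reproduced here; the feature choices are illustrative.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVR

def features(img):
    """Illustrative feature vector: point-wise spatial statistics plus
    statistics of the similarity between neighbouring gradient magnitudes."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    gm = np.hypot(gx, gy)
    # similarity of each gradient magnitude to its right-hand neighbour
    sim = (2 * gm[:, :-1] * gm[:, 1:] + 1e-3) / (gm[:, :-1]**2 + gm[:, 1:]**2 + 1e-3)
    return np.array([img.mean(), img.std(), gm.mean(), gm.std(),
                     sim.mean(), sim.std()])

# Assuming train_imgs (list of images) and mos (human scores) are given:
# model = SVR().fit(np.stack([features(i) for i in train_imgs]), mos)
```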
Sunspot Pattern Classification using PCA and Neural Networks (Poster)
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Thompson, D. E.; Slater, G. L.
2005-01-01
The sunspot classification scheme presented in this paper is considered as a 2-D classification problem on archived datasets and is not a real-time system. As a first step, it mirrors the Zuerich/McIntosh historical classification system and reproduces classification of sunspot patterns based on preprocessing and neural net training datasets. Ultimately, the project intends to move from more rudimentary schemes to spatial-temporal-spectral classes derived by correlating spatial and temporal variations in various wavelengths with the brightness fluctuation spectrum of the sun in those wavelengths. Once the approach is generalized, the focus will naturally move from a 2-D to an n-D classification, where "n" includes time and frequency. Here, the 2-D perspective refers both to the actual SOHO Michelson Doppler Imager (MDI) images that are processed and to the fact that a 2-D matrix is created from each image during preprocessing. The 2-D matrix is the result of running Principal Component Analysis (PCA) over the selected dataset images, and the resulting matrices and their eigenvalues are the objects that are stored in a database, classified, and compared. These matrices are indexed according to the standard McIntosh classification scheme.
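As a sketch of the preprocessing described here, PCA over a set of flattened MDI images yields per-image component scores and eigenvalues that can be stored and compared; this is illustrative only, and the project's actual matrix construction may differ.

```python
import numpy as np

def pca_signature(images, k=10):
    """Project flattened images onto their top-k principal components;
    the component scores (and eigenvalues) are what would be stored
    in the database, classified, and compared."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    X -= X.mean(axis=0)
    # SVD of the centred data gives the principal axes directly.
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:k].T              # per-image signature
    eigenvalues = (S ** 2) / (len(X) - 1)
    return scores, eigenvalues[:k]
```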
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
Cartographic analyses of geographic information available on Google Earth Images
NASA Astrophysics Data System (ADS)
Oliveira, J. C.; Ramos, J. R.; Epiphanio, J. C.
2011-12-01
The purpose was to evaluate the planimetric accuracy of satellite images available in the Google Earth database. These images cover the vicinity of the Federal University of Viçosa, Minas Gerais, Brazil. The methodology evaluated the geographic information of three groups of images according to the level of detail presented on screen (zoom). These groups were labeled Zoom 1000 (a single image for the entire study area), Zoom 100 (a mosaic of 73 images), and Zoom 100 with geometric correction (the same mosaic after a geometric correction using control points). For each group, cartographic accuracy was measured based on statistical analyses and the Brazilian legal standards for planimetric mapping. Twenty-two points were identified in each group of images, and the coordinates of each point were compared with field coordinates obtained by GPS (Global Positioning System). Table 1 shows the results for accuracy (based on a threshold of 0.5 mm x mapping scale) and for tendency (abscissa and ordinate) between image and field coordinates. The geometric correction applied to the Zoom 100 group reduced the trends identified earlier, and the statistical tests indicated the data are usable for mapping at a scale of 1/5000 with error smaller than 0.5 mm x scale. The analyses confirmed the cartographic quality of the data provided by Google, as well as the possibility of reducing the positioning divergences present in the data. It can be concluded that geographic information can be obtained from the database available on Google Earth; however, the level of detail (zoom) used when viewing and capturing information on screen influences the cartographic quality of the mapping. Despite the cartographic and thematic potential of the database, it is important to note that both the software and the data distributed by Google Earth are subject to use and distribution policies.
NASA Astrophysics Data System (ADS)
Dziedzic, Adam; Mulawka, Jan
2014-11-01
NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. Descriptions of selected NoSQL databases are presented, each analysed with a primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to how they handle data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited reference mechanisms. An intermediate model between graph theory and relational algebra that can address this problem should be created. Finally, a proposal for a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
Carey, George B; Kazantsev, Stephanie; Surati, Mosmi; Rolle, Cleo E; Kanteti, Archana; Sadiq, Ahad; Bahroos, Neil; Raumann, Brigitte; Madduri, Ravi; Dave, Paul; Starkey, Adam; Hensing, Thomas; Husain, Aliya N; Vokes, Everett E; Vigneswaran, Wickii; Armato, Samuel G; Kindler, Hedy L; Salgia, Ravi
2012-01-01
Objective An area of need in cancer informatics is the ability to store images in a comprehensive database as part of translational cancer research. To meet this need, we have implemented a novel tandem database infrastructure that facilitates image storage and utilisation. Background We had previously implemented the Thoracic Oncology Program Database Project (TOPDP) database for our translational cancer research needs. While useful for many research endeavours, it is unable to store images, hence our need to implement an imaging database which could communicate easily with the TOPDP database. Methods The Thoracic Oncology Research Program (TORP) imaging database was designed using the Research Electronic Data Capture (REDCap) platform, which was developed by Vanderbilt University. To demonstrate proof of principle and evaluate utility, we performed a retrospective investigation into tumour response for malignant pleural mesothelioma (MPM) patients treated at the University of Chicago Medical Center with either of two analogous chemotherapy regimens and consented to at least one of two UCMC IRB protocols, 9571 and 13473A. Results A cohort of 22 MPM patients was identified using clinical data in the TOPDP database. After measurements were acquired, two representative CT images and 0–35 histological images per patient were successfully stored in the TORP database, along with clinical and demographic data. Discussion We implemented the TORP imaging database to be used in conjunction with our comprehensive TOPDP database. While it requires an additional effort to use two databases, our database infrastructure facilitates more comprehensive translational research. Conclusions The investigation described herein demonstrates the successful implementation of this novel tandem imaging database infrastructure, as well as the potential utility of investigations enabled by it. The data model presented here can be utilised as the basis for further development of other larger, more streamlined databases in the future. PMID:23103606
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura
2013-04-01
There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here an automatic flood mapping application is introduced, currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. The method can be applied to both medium- and high-resolution SAR images. The flood mapping application consists of two main blocks: 1) a set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive; 2) an algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology combining histogram thresholding, region growing and change detection, enabling automatic, objective and reliable flood extent extraction from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reduce over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, taking as input a flood image and a reference image. Stakeholders in flood management and service providers can log onto the flood mapping application to get support for retrieving, from the rolling archive, the most appropriate pre-flood reference image. Potential users are also able to apply the implemented flood delineation algorithm. Case studies of several recent high-magnitude flooding events (e.g. the July 2007 Severn River flood, UK, and the March 2010 Red River flood, US) observed by high-resolution SAR sensors as well as airborne photography highlight the advantages and limitations of the online application. A mid-term target is the exploitation of ESA SENTINEL-1 SAR data streams. In the long term it is foreseen to extend the application to systematically extract flooded areas from all SAR images acquired on a daily, weekly or monthly basis. On-going research activities investigate the usefulness of the method for mapping flood hazard at the global scale using databases of historic SAR remote sensing-derived flood inundation maps.
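A compact sketch of the hybrid flood-extraction idea follows, assuming calibrated backscatter images (in dB) and externally chosen thresholds; the operational algorithm's statistical calibration of the open-water distribution is not reproduced here, and the threshold offsets are illustrative.

```python
import numpy as np
from scipy import ndimage

def flood_extent(crisis, reference, water_db, change_db):
    """Hybrid scheme: histogram thresholding seeds open water in the
    crisis image, region growing relaxes the threshold around seeds,
    and change detection against the pre-flood reference suppresses
    permanent low-backscatter areas."""
    seeds = crisis < water_db                # thresholding
    grown = crisis < water_db + 3.0          # relaxed mask for region growing
    labels, _ = ndimage.label(grown)
    touching = np.unique(labels[seeds])      # regions touching a seed
    flood = np.isin(labels, touching[touching > 0])
    darkened = (reference - crisis) > change_db   # change detection
    return flood & darkened
```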
Albà, Xènia; Figueras I Ventura, Rosa M; Lekadir, Karim; Tobon-Gomez, Catalina; Hoogendoorn, Corné; Frangi, Alejandro F
2014-12-01
Magnetic resonance imaging (MRI), specifically late-enhanced MRI, is the standard clinical imaging protocol to assess cardiac viability. Segmentation of myocardial walls is a prerequisite for this assessment. Automatic and robust multisequence segmentation is required to support processing massive quantities of data. A generic rule-based framework to automatically segment the left ventricle myocardium is presented here. We use intensity information, and include shape and interslice smoothness constraints, providing robustness to subject- and study-specific changes. Our automatic initialization considers the geometrical and appearance properties of the left ventricle, as well as interslice information. The segmentation algorithm uses a decoupled, modified graph cut approach with control points, providing a good balance between flexibility and robustness. The method was evaluated on late-enhanced MRI images from a 20-patient in-house database, and on cine-MRI images from a 15-patient open access database, both using as reference manually delineated contours. Segmentation agreement, measured using the Dice coefficient, was 0.81±0.05 and 0.92±0.04 for late-enhanced MRI and cine-MRI, respectively. The method was also compared favorably to a three-dimensional Active Shape Model approach. The experimental validation with two magnetic resonance sequences demonstrates increased accuracy and versatility. © 2013 Wiley Periodicals, Inc.
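The Dice agreement reported above is computed between the automatic segmentation and the manually delineated reference; a minimal version:

```python
import numpy as np

def dice(seg, ref):
    """Dice coefficient 2*|A & B| / (|A| + |B|) between an automatic
    segmentation mask and a manually delineated reference mask."""
    a, b = seg.astype(bool), ref.astype(bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0
```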
Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow
2013-09-01
The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
A similarity learning approach to content-based image retrieval: application to digital mammography.
El-Naqa, Issam; Yang, Yongyi; Galatsanos, Nikolas P; Nishikawa, Robert M; Wernick, Miles N
2004-10-01
In this paper, we describe an approach to content-based retrieval of medical images from a database and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose a hierarchical learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing MC clusters similar to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; 4) relevance feedback can be effectively incorporated into this learning framework to improve retrieval precision based on online interaction with users; and 5) the images retrieved by the network can have predictive value for the disease condition of the query.
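To illustrate the hierarchical two-stage idea (a binary screen followed by a regression module), here is a hedged sketch using support vector machines; feature construction for (query, candidate) mammogram pairs is assumed given, and the class is illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC, SVR

class SimilarityCascade:
    """Stage 1 screens out clearly non-similar pairs; stage 2 assigns a
    graded similarity to the survivors, trading efficiency for accuracy."""
    def __init__(self):
        self.screen = SVC()   # binary: similar vs. not similar
        self.grade = SVR()    # graded similarity for survivors

    def fit(self, X, is_similar, similarity):
        self.screen.fit(X, is_similar)
        self.grade.fit(X[is_similar == 1], similarity[is_similar == 1])
        return self

    def predict(self, X):
        out = np.zeros(len(X))
        keep = self.screen.predict(X) == 1
        if keep.any():
            out[keep] = self.grade.predict(X[keep])
        return out
```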
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing as well as database querying and management, with security and data anonymization concerns well taken care of. The database has a multi-tier client-server architecture comprising a Relational Database Management System, Security Layer, Application Layer and User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to handle. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases, and medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. The prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
Lossef, S V; Schwartz, L H
1990-09-01
A computerized reference system for radiology journal articles was developed by using an IBM-compatible personal computer with a hand-held optical scanner and optical character recognition software. This allows direct entry of scanned text from printed material into word processing or data-base files. Additionally, line diagrams and photographs of radiographs can be incorporated into these files. A text search and retrieval software program enables rapid searching for keywords in scanned documents. The hand scanner and software programs are commercially available, relatively inexpensive, and easily used. This permits construction of a personalized radiology literature file of readily accessible text and images requiring minimal typing or keystroke entry.
Mobile object retrieval in server-based image databases
NASA Astrophysics Data System (ADS)
Manger, D.; Pagel, F.; Widak, H.
2013-05-01
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images from the database and highlighting the visual information they share with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
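On the server side, the bag-of-words model reduces each image's local features to a fixed-length histogram that can be ranked cheaply. A minimal sketch follows, with vocabulary learning and the paper's state-of-the-art extensions omitted; function names are illustrative.

```python
import numpy as np

def bow_histogram(descriptors, vocab):
    """Quantize local descriptors (e.g. from a keypoint detector) against
    a visual vocabulary and return a normalized bag-of-words histogram."""
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / (np.linalg.norm(hist) or 1.0)

def rank_database(query_hist, db_hists):
    """Rank stored images by cosine similarity to the query histogram."""
    return np.argsort(-(db_hists @ query_hist))
```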
CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.
Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka
2016-07-01
In this paper, we present a new video database: CVD2014-Camera Video Database. In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains the observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).
Geodemographic segmentation systems for screening health data.
Openshaw, S; Blake, M
1995-01-01
AIM--To describe how geodemographic segmentation systems might be useful as a quick and easy way of exploring postcoded health databases for potentially interesting patterns related to deprivation and other socioeconomic characteristics. DESIGN AND SETTING--This is demonstrated using GB Profiles, a freely available geodemographic classification system developed at Leeds University. It is used here to screen a database of colorectal cancer registrations as a first step in the analysis of that data. RESULTS AND CONCLUSION--Conventional geodemographics is a fairly simple technology and a number of outstanding methodological problems are identified. A solution to some problems is illustrated by using neural net based classifiers and then by reference to a more sophisticated geodemographic approach via a data optimal segmentation technique. PMID:8594132
Geocoding and stereo display of tropical forest multisensor datasets
NASA Technical Reports Server (NTRS)
Welch, R.; Jordan, T. R.; Luvall, J. C.
1990-01-01
Concern about the future of tropical forests has led to a demand for geocoded multisensor databases that can be used to assess forest structure, deforestation, thermal response, evapotranspiration, and other parameters linked to climate change. In response to studies being conducted at Braulio Carrillo National Park, Costa Rica, digital satellite and aircraft images recorded by Landsat TM, SPOT HRV, Thermal Infrared Multispectral Scanner, and Calibrated Airborne Multispectral Scanner sensors were placed in register using the Landsat TM image as the reference map. Despite problems caused by relief, multitemporal datasets, and geometric distortions in the aircraft images, registration was accomplished to within ±20 m (±1 data pixel). A digital elevation model constructed from a multisensor Landsat TM/SPOT stereopair proved useful for generating perspective views of the rugged, forested terrain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.
NASA Astrophysics Data System (ADS)
Wallace, K.; Leonard, G.; Stewart, C.; Wilson, T. M.; Randall, M.; Stovall, W. K.
2015-12-01
The internationally collaborative volcanic ash website (http://volcanoes.usgs.gov/ash/) has been an important global information resource for ashfall preparedness and impact guidance since 2004. Recent volcanic ashfalls with significant local, regional, and global impacts highlighted the need to improve the website to make it more accessible and pertinent to users worldwide. Recently, the Volcanic Ash Impacts Working Group (Cities and Volcanoes Commission of IAVCEI) redesigned and modernized the website. Improvements include 1) a database-driven back end, 2) reorganized menu navigation, 3) language translation, 4) increased downloadable content, 5) addition of ash-impact case studies, 6) expanded and updated references, 7) an image database, and 8) inclusion of cooperating organizations' logos. The database-driven platform makes the website more dynamic and efficient to operate and update. New menus provide information about specific impact topics (buildings, transportation, power, health, agriculture, water and waste water, equipment and communications, clean-up), and updated content has been added throughout all topics. A new "for scientists" menu includes information on ash collection and analysis. Website translation using Google Translate will significantly increase the user base. Printable resources (e.g. checklists, pamphlets, posters) provide information to people without Internet access. Ash impact studies are used to improve mitigation measures during future eruptions, and links to case studies will assist communities' preparation and response plans. The Case Studies menu is intended to be a living topic area, growing as new case studies are published. A database of all images from the website allows users to access larger-resolution images and additional descriptive details. Logos clarify linkages among key contributors and assure users that the site is authoritative and science-based.
Comparison of eye imaging pattern recognition using neural network
NASA Astrophysics Data System (ADS)
Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.
2015-05-01
The beauty of an eye recognition system is that it can automatically identify and verify a human from digital images or a video source. The eye has various characteristics, such as iris color, pupil size and eye shape. This study presents the analysis, design and implementation of a system for eye image recognition. All eye images captured from the webcam in RGB format must pass through several processing steps before they can serve as input to the pattern recognition stage. The results show that the final weight and bias values obtained after training on 6 eye images per subject are memorized by the neural network and serve as the reference weights and biases for the testing stage. The targets are divided into 5 classes for 5 subjects. The system recognizes the subject in an eye image based on the target set during training: when the values of a new eye image and an eye image in the database are almost equal, the eye image is considered matched.
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
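A hedged sketch of the store-or-skip test described in this patent abstract: the incoming image's feature vector is compared against its most similar stored neighbour, and the image is admitted only when the visual-similarity parameter stays below a threshold. The cosine measure and the names are illustrative; the patent does not prescribe them.

```python
import numpy as np

def should_store(new_vec, stored_vecs, threshold=0.95):
    """Return True if the incoming image's feature vector is sufficiently
    dissimilar from every stored vector to justify adding it."""
    stored_vecs = np.asarray(stored_vecs, dtype=float)
    if stored_vecs.size == 0:
        return True
    norms = np.linalg.norm(stored_vecs, axis=1) * np.linalg.norm(new_vec)
    sims = stored_vecs @ new_vec / np.where(norms == 0, 1, norms)
    return sims.max() < threshold   # redundant content is not re-stored
```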
Digital Dental X-ray Database for Caries Screening
NASA Astrophysics Data System (ADS)
Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila
2016-06-01
A standard database is an essential requirement for comparing the performance of image analysis techniques; the main issue in dental image analysis is the lack of such an available image database, which is provided in this paper. We collected periapical dental X-ray images that are suitable for analysis and have been approved by many dental experts. This type of dental radiograph is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 periapical X-ray images covering both the upper and lower jaws. This digital dental database is constructed to provide a source for researchers to use and compare image analysis techniques and to improve the performance of each technique.
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; E Magnin, I
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
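The partitioning idea can be sketched as a balance condition: each node i receives a subset of size s_i such that overhead_i + s_i/speed_i is equal across nodes. The sketch below solves this under that assumption, which is a simplification of the paper's complexity model; all names are illustrative.

```python
import numpy as np

def partition_sizes(n_images, node_speeds, overheads):
    """Split a database of n_images across grid nodes so that
    overhead_i + s_i / speed_i is balanced (equal finish time T)."""
    v = np.asarray(node_speeds, dtype=float)   # images per second per node
    o = np.asarray(overheads, dtype=float)     # per-node startup cost (s)
    # Solve o_i + s_i / v_i = T for all i with sum(s_i) = n_images.
    T = (n_images + (o * v).sum()) / v.sum()
    sizes = np.maximum(0.0, (T - o) * v)
    return np.round(sizes / sizes.sum() * n_images).astype(int)

# e.g. partition_sizes(10000, node_speeds=[50, 30, 20], overheads=[5, 20, 5])
```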
Implementation of a high-speed face recognition system that uses an optical parallel correlator.
Watanabe, Eriko; Kodate, Kashiko
2005-02-10
We implement a fully automatic fast face recognition system by using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N (i.e., matching one image against N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 s, including the preprocessing and postprocessing times. The binary real-only matched filter is devised for the sake of face recognition, and the system is optimized by the false-rejection rate (FRR) and the false-acceptance rate (FAR), according to 300 samples selected by the biometrics guideline. From trial 1:N identification experiments with the optical parallel correlator, we acquired low error rates of 2.6% FRR and 1.3% FAR. Facial images of people wearing thin glasses or heavy makeup that rendered identification difficult were identified with this system.
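A digital analogue of the optical correlation step, with the matched filter binarized to the sign of its real part as a rough stand-in for the binary real-only filter; this numpy sketch only mimics the behaviour of the optical system and is not the authors' implementation.

```python
import numpy as np

def correlate_matched(scene, reference):
    """Correlate a face image with a binarized matched filter via FFT;
    a sharp correlation peak indicates a database match."""
    F = np.fft.fft2(scene)
    H = np.conj(np.fft.fft2(reference, s=scene.shape))
    H = np.sign(H.real)                 # binary real-only approximation
    corr = np.fft.ifft2(F * H).real
    return corr.max() / (np.abs(corr).mean() + 1e-12)  # peak sharpness score
```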
A Knowledge-Based System for the Computer Assisted Diagnosis of Endoscopic Images
NASA Astrophysics Data System (ADS)
Kage, Andreas; Münzenmayer, Christian; Wittenberg, Thomas
Due to current demographic developments, the use of Computer-Assisted Diagnosis (CAD) systems is becoming a more important part of clinical workflows and clinical decision making. Because changes in the mucosa of the esophagus can indicate the first stage of cancerous development, there is large interest in detecting and correctly diagnosing any such lesion. We present a knowledge-based system that can support a physician in the interpretation and diagnosis of endoscopic images of the esophagus. Our system is designed to support the physician directly during the examination of the patient, thus providing diagnostic assistance at the point of care (POC). Based on an interactively marked region in an endoscopic image of interest, the system provides a diagnostic suggestion drawn from an annotated reference image database. Furthermore, using relevance feedback mechanisms, the results can be enhanced interactively.
Kingfisher: a system for remote sensing image database management
NASA Astrophysics Data System (ADS)
Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.
2003-04-01
At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application; many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem very encouraging.
Yunker, Bryan E; Cordes, Dietmar; Scherzinger, Ann L; Dodd, Gerald D; Shandas, Robin; Feng, Yusheng; Hunter, Kendall S
2013-05-01
This study investigated the ultrasound, MRI, and CT imaging characteristics of several industrial casting and molding compounds as a precursor to the future development of durable and anatomically correct flow phantoms. A set of usability and performance criteria was established for a proposed phantom design capable of supporting liquid flow during imaging. A literature search was conducted to identify the materials and methods previously used in phantom fabrication. A database of human tissue and casting material properties was compiled to facilitate the selection of appropriate materials for testing. Several industrial casting materials were selected, procured, and used to fabricate test samples that were imaged with ultrasound, MRI, and CT. Five silicones and one polyurethane were selected for testing. Samples of all materials were successfully fabricated. All imaging modalities were able to discriminate between the materials tested. Ultrasound testing showed that three of the silicones could be imaged to a depth of at least 2.5 cm (1 in.). The RP-6400 polyurethane exhibited excellent contrast and edge detail for MRI phantoms and appears to be an excellent water reference for CT applications. The 10T and 27T silicones appear to be usable water references for MRI imaging. Based on the study data and the stated selection criteria, the P-4 silicone provided sufficient material contrast to water and edge detail for use across all imaging modalities, with the benefits of availability, low cost, dimensional stability, durability, cleanability, optical clarity, and being nontoxic and nonflammable. The physical and imaging differences of the materials documented in this study may be useful for other applications.
NASA Astrophysics Data System (ADS)
Ali, Abder-Rahman A.; Deserno, Thomas M.
2012-02-01
Malignant melanoma is the third most frequent type of skin cancer and one of the most malignant tumors, accounting for 79% of skin cancer deaths. Melanoma is highly curable if diagnosed early and treated properly, as survival varies between 15% and 65% from terminal to early stages, respectively. So far, melanoma diagnosis depends subjectively on the dermatologist's expertise. Computer-aided diagnosis (CAD) systems based on epiluminescence light microscopy can provide an objective second opinion on pigmented skin lesions (PSL). This work systematically analyzes the evidence on the effectiveness of automated melanoma detection in images from a dermatoscopic device. Automated CAD applications were analyzed to estimate their diagnostic outcome. Searching online databases for publication dates between 1985 and 2011, a total of 182 studies on dermatoscopic CAD were found. With respect to the systematic selection criteria, 9 studies were included, published between 2002 and 2011. Those studies formed databases of 14,421 dermatoscopic images including both malignant melanoma and benign nevus images, with 8,110 images being available, ranging in resolution from 150 x 150 to 1568 x 1045 pixels. Maximum and minimum sensitivity and specificity are 100.0% and 80.0%, and 98.14% and 61.6%, respectively. The area under the receiver operating characteristic curve (AUC) and the pooled sensitivity, specificity, and diagnostic odds ratio (DOR) are 0.87, 0.90, 0.81, and 15.89, respectively. Thus, although automated melanoma detection showed good accuracy in terms of sensitivity, specificity, and AUC, diagnostic performance in terms of the DOR was found to be poor. This might be due to the lack of dermatoscopic image resources (ground truth) needed for comprehensive assessment of diagnostic performance. In future work, we aim to test this hypothesis by joining dermatoscopic images into a unified database that serves as a standard reference for dermatology-related research in PSL classification.
Model-based segmentation of hand radiographs
NASA Astrophysics Data System (ADS)
Weiler, Frank; Vogelsang, Frank
1998-06-01
An important procedure in pediatrics is determining the skeletal maturity of a patient from radiographs of the hand. There is great interest in automating this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows assessment of skeletal maturity against an appropriate database of reference bones, similar to atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a priori knowledge of the shape and topology of the bones in an additional energy term. This 'scene knowledge' is integrated in a complex hierarchical image model that is used for the image analysis task.
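For reference, an extended snake energy of the kind described here can be written as below. The internal and image terms follow the classic active contour formulation of Kass et al.; the additional knowledge term and its weight are placeholders, since the paper's exact form of the shape/topology term is not given in the abstract.

```latex
% Classic active contour (snake) energy plus an a priori knowledge term;
% E_know and gamma are placeholders for the paper's additional energy term.
E_{\mathrm{snake}} = \int_{0}^{1} \Big[
    \tfrac{1}{2}\big(\alpha\,|v'(s)|^{2} + \beta\,|v''(s)|^{2}\big)
  + E_{\mathrm{img}}\big(v(s)\big)
  + \gamma\,E_{\mathrm{know}}\big(v(s)\big)
\Big]\, ds
```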
NASA Astrophysics Data System (ADS)
Protsyuk, Yu. I.; Andruk, V. N.; Kazantseva, L. V.
The paper discusses and illustrates the steps of basic processing of digitized images of astronegatives. Software for obtaining rectangular coordinates and photometric values of objects on photographic plates was created in the LINUX/MIDAS/ROMAFOT environment. The program can automatically process a specified number of files in FITS format with sizes up to 20000 x 20000 pixels. Other programs were written in FORTRAN and PASCAL and can run under LINUX or WINDOWS. They were used for identification of stars, separation and exclusion of diffraction satellites and of double and triple exposures, elimination of image defects, and reduction to the equatorial coordinates and magnitudes of reference catalogs.
Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro
2013-05-16
Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain large numbers of microscopic images of intracellular structures. In this circumstance, automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, there was no benchmark microscopic image dataset for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a new database interface, LIPService. We updated the LIPS dataset and established LIPService to promote efficient inspection of intracellular structure configurations. Cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered using probe names or morphometric parameters such as stomatal aperture. In addition to the serial optical-section images of the original LIPS database, new volume-rendering data have been released to allow easy web browsing of three-dimensional intracellular structures and inspection of their configurations or relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition of images from another plant cell image database using image clustering analyses. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells. The newly released LIPService allows users to inspect the relationship between organellar three-dimensional configurations and morphometric parameters.
Reference Fluid Thermodynamic and Transport Properties Database (REFPROP)
National Institute of Standards and Technology Data Gateway
SRD 23 NIST Reference Fluid Thermodynamic and Transport Properties Database (REFPROP) (PC database for purchase) NIST 23 contains revised data in a Windows version of the database, including 105 pure fluids and allowing mixtures of up to 20 components. The fluids include the environmentally acceptable HFCs, traditional HFCs and CFCs, and 'natural' refrigerants like ammonia.
Automatic detection of the macula in retinal fundus images using seeded mode tracking approach.
Wong, Damon W K; Liu, Jiang; Tan, Ngan-Meng; Yin, Fengshou; Cheng, Xiangang; Cheng, Ching-Yu; Cheung, Gemmy C M; Wong, Tien Yin
2012-01-01
The macula is the part of the eye responsible for central high-acuity vision. Detection of the macula is an important task in retinal image processing, as it is a landmark for subsequent disease assessment, such as for age-related macular degeneration. In this paper, we present an approach to automatically determine the macula centre in retinal fundus images. First, contextual information on the image is combined with a statistical model to obtain an approximate localization of the macula region of interest. Subsequently, we propose the use of a seeded mode tracking technique to locate the macula centre. The proposed approach is tested on a large dataset composed of 482 normal images and 162 glaucoma images from the ORIGA database and an additional 96 AMD images. The results show ROI detection of 97.5% and correct detection of the macula within 1/3 disc diameter (DD) of a manual reference in 90.5% of cases, which outperforms other current methods. These results are promising for the use of the proposed approach to locate the macula for the detection of macular diseases from retinal images.
Animal Detection in Natural Images: Effects of Color and Image Database
Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.
2013-01-01
The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color on animal detection may be masked by later processes when measuring reaction times. The ERP results of the go/nogo and forced-choice tasks were similar to those reported earlier. Non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might thus have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited to the investigation of the processing of natural scenes than other commonly used databases. PMID:24130744
[Construction of chemical information database based on optical structure recognition technique].
Lv, C Y; Li, M N; Zhang, L R; Liu, Z M
2018-04-18
To create a protocol that can be used to construct a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents, and technical reports from different chemical disciplines were collected and stored in PDF format as the fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data by using name conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools of the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references, including title, author, journal, and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information from figures, tables, and textual paragraphs that could be used to populate the structured chemical database. After this step, detailed manual revision and annotation were conducted to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated, covering molecular structure data files, molecular information, bibliographical references, predicted attributes, and known bioactivities. With canonical simplified molecular-input line-entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecule number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully. A total of 174 research articles and 25 reviews published in Marine Drugs from January 2015 to June 2016 were collected as the essential data source, and an elementary marine natural product database named PKU-MNPD was built in accordance with this protocol, containing 3,262 molecules and 19,821 records. This data aggregation protocol is of great help for constructing chemical information databases from original documents with accuracy, comprehensiveness, and efficiency. The structured chemical information database can facilitate access to medical intelligence and accelerate the transformation of scientific research achievements.
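The key design choice above is keying records by canonical SMILES so that the same molecule extracted from different documents collapses to one row. A minimal sketch of that idea, assuming RDKit is installed; the table layout, file names, and example SMILES are illustrative, not the PKU-MNPD schema:

```python
# Keying a chemical-information table by canonical SMILES (illustrative).
import sqlite3
from rdkit import Chem

conn = sqlite3.connect("mnpd.db")
conn.execute("""CREATE TABLE IF NOT EXISTS molecules (
                    canonical_smiles TEXT PRIMARY KEY,
                    source_pdf       TEXT,
                    reference        TEXT)""")

def add_molecule(raw_smiles, source_pdf, reference):
    mol = Chem.MolFromSmiles(raw_smiles)   # returns None if unparsable
    if mol is None:
        raise ValueError(f"unparsable SMILES: {raw_smiles}")
    canonical = Chem.MolToSmiles(mol)      # canonical form as primary key
    conn.execute("INSERT OR IGNORE INTO molecules VALUES (?, ?, ?)",
                 (canonical, source_pdf, reference))

# Two spellings of phenol end up as one record.
add_molecule("C1=CC=CC=C1O", "marinedrugs_2015_001.pdf", "Mar. Drugs 2015")
add_molecule("Oc1ccccc1", "marinedrugs_2016_042.pdf", "Mar. Drugs 2016")
conn.commit()
```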
FreeSolv: A database of experimental and calculated hydration free energies, with input files
Mobley, David L.; Guthrie, J. Peter
2014-01-01
This work provides a curated database of experimental and calculated hydration free energies for small neutral molecules in water, along with molecular structures, input files, references, and annotations. We call this the Free Solvation Database, or FreeSolv. Experimental values were taken from prior literature and will continue to be curated, with updated experimental references and data added as they become available. Calculated values are based on alchemical free energy calculations using molecular dynamics simulations. These used the GAFF small molecule force field in TIP3P water with AM1-BCC charges. Values were calculated with the GROMACS simulation package, with full details given in references cited within the database itself. This database builds in part on a previous, 504-molecule database containing similar information. However, additional curation of both experimental data and calculated values has been done here, and the total number of molecules is now up to 643. Additional information is now included in the database, such as SMILES strings, PubChem compound IDs, accurate reference DOIs, and others. One version of the database is provided in the Supporting Information of this article, but as ongoing updates are envisioned, the database is now versioned and hosted online. In addition to providing the database, this work describes its construction process. The database is available free-of-charge via http://www.escholarship.org/uc/item/6sd403pz. PMID:24928188
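A typical use of such a database is benchmarking calculated hydration free energies against experiment. Below is a minimal sketch, assuming the database has been exported to a CSV file with hypothetical column names `expt` and `calc` (both in kcal/mol); the actual FreeSolv distribution format may differ.

```python
# Comparing calculated vs. experimental hydration free energies
# (column names and file name are assumptions for illustration).
import numpy as np
import pandas as pd

df = pd.read_csv("freesolv.csv")         # one row per molecule
diff = df["calc"] - df["expt"]

rmse = np.sqrt(np.mean(diff ** 2))       # root-mean-square error
mue = np.abs(diff).mean()                # mean unsigned error
bias = diff.mean()                       # systematic offset

print(f"RMSE = {rmse:.2f} kcal/mol, MUE = {mue:.2f}, bias = {bias:+.2f}")
```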
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in the training and testing steps of the algorithm. This directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easy way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type, and the territory. Designing an image database using a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and difficult way is to design the database using real-world images. Designing an image database with real-world images is much more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with each other with regard to their pros and cons.
Wavelet optimization for content-based image retrieval in medical databases.
Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C
2010-04-01
In this article we propose a content-based image retrieval (CBIR) method to aid diagnosis in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the images in a database most similar to a query image submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases, one for diabetic retinopathy follow-up and one for screening mammography, as well as a general-purpose database. Results are promising: a mean precision of 56.50%, 70.91%, and 96.10% is achieved for these three databases, respectively, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
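A minimal sketch of the signature-plus-distance idea, assuming PyWavelets and a fixed wavelet basis. The paper additionally adapts the basis via the lifting scheme and tunes the subband weights by optimization, which this toy version does not reproduce.

```python
# Wavelet subband signatures and a weighted distance for retrieval.
import numpy as np
import pywt

def signature(image, wavelet="db2", level=3):
    """Characterize the coefficient distribution of each detail subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sig = []
    for band in coeffs[1:]:                   # detail subbands per level
        for sub in band:                      # horizontal, vertical, diagonal
            sig += [np.abs(sub).mean(), sub.std()]
    return np.array(sig)

def distance(sig_a, sig_b, weights=None):
    w = np.ones_like(sig_a) if weights is None else weights
    return np.sum(w * np.abs(sig_a - sig_b))  # weighted L1 distance

# Retrieval: rank database images by distance to the query signature.
query = signature(np.random.rand(256, 256))
db = [signature(np.random.rand(256, 256)) for _ in range(10)]
ranking = sorted(range(len(db)), key=lambda i: distance(query, db[i]))
print("closest database image:", ranking[0])
```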
A Sediment Testing Reference Area Database for the San Francisco Deep Ocean Disposal Site (SF-DODS)
EPA established and maintains an SF-DODS reference area database of previously collected sediment test data. Several sets of sediment test data have been successfully collected from the SF-DODS reference area.
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
Pruitt, Kim D.; Tatusova, Tatiana; Maglott, Donna R.
2005-01-01
The National Center for Biotechnology Information (NCBI) Reference Sequence (RefSeq) database (http://www.ncbi.nlm.nih.gov/RefSeq/) provides a non-redundant collection of sequences representing genomic data, transcripts and proteins. Although the goal is to provide a comprehensive dataset representing the complete sequence information for any given species, the database pragmatically includes sequence data that are currently publicly available in the archival databases. The database incorporates data from over 2400 organisms and includes over one million proteins representing significant taxonomic diversity spanning prokaryotes, eukaryotes and viruses. Nucleotide and protein sequences are explicitly linked, and the sequences are linked to other resources including the NCBI Map Viewer and Gene. Sequences are annotated to include coding regions, conserved domains, variation, references, names, database cross-references, and other features using a combined approach of collaboration and other input from the scientific community, automated annotation, propagation from GenBank and curation by NCBI staff. PMID:15608248
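Programmatic access to RefSeq records is commonly done through NCBI's E-utilities. A minimal sketch using Biopython follows; the accession shown (NM_000518, human beta-globin mRNA) is just an example, and a network connection and valid contact email are required.

```python
# Fetching a RefSeq nucleotide record via NCBI E-utilities (Biopython).
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"        # NCBI requires a contact address

with Entrez.efetch(db="nucleotide", id="NM_000518",
                   rettype="gb", retmode="text") as handle:
    record = SeqIO.read(handle, "genbank")

print(record.id, record.description)
for feature in record.features:
    if feature.type == "CDS":           # coding regions annotated in RefSeq
        print(feature.location, feature.qualifiers.get("protein_id"))
```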
The validity of arterial measurements in a South African embalmed body population.
Schoeman, Marelize; van Schoor, Albert; Suleman, Farhana; Louw, Liebie; du Toit, Peet
2018-01-01
Knowledge of the normal arterial diameter at a given anatomical point is the first step toward quantifying the severity of cardiovascular diseases. According to several studies, parameters such as weight, height, age, and sex can explain morphometric variations in arterial anatomy observed in a population. Before the development of a reference database against which to compare the diameters of arteries in a variety of pathological conditions, the compatibility between embalmed body measurements and computed tomography (CT) measurements must first be established. The aim of this study was to compare embalmed body measurements and CT measurements at 19 different arterial sites to establish whether embalmed body measurements are a true reflection of a living population. A total of 154 embalmed bodies were randomly selected from the Department of Anatomy at the University of Pretoria, and 36 embalmed bodies were randomly selected from the Department of Human Anatomy at the University of Limpopo, Medunsa Campus. Dissections were performed on the embalmed body sample and the arterial dimensions were measured with a mechanical dial-sliding caliper (accuracy of 0.01 mm). Thirty CT images for each of the 19 arterial sites were retrospectively selected from the database of radiographic images at the Department of Radiology, Steve Biko Academic Hospital. RadiAnt, a Digital Imaging and Communications in Medicine (DICOM) viewer, was used to analyze the CT images. The only statistically significant differences between the embalmed body measurements and CT measurements were found in the left common carotid and left subclavian arteries. The null hypothesis of no statistically significant difference between the embalmed body and CT measurements was accepted, since the P value indicated no significant difference for 87% of the measurements, the exceptions being the left common carotid and left subclavian arteries. With the exception of these two measurements, measurements in embalmed bodies and living people are interchangeable, and concerns regarding the effect of distortion and shrinkage are unfounded. Even small changes in arterial diameter greatly influence blood flow and blood pressure, which contribute to undesirable clinical outcomes such as aortic aneurysms and aortic dissections. This study completes the first step towards the development of a reference database against which to compare the diameters of arteries in a variety of pathological conditions in a South African population.
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko
2005-09-01
Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. Many technical subjects are still under study. For instance, the number of images that can be stored is limited under current systems, and the rate of recognition must be improved to account for photographs taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e., matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including pre- and post-processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% false reject rate and a 1.3% false accept rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for FARCO that employs a temporal sequence of moving images. Applying this algorithm to natural postures, we scored a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registration of babies at hospitals, and handling of immeasurably large numbers of images in a database.
Virgili, Gianni; Menchini, Francesca; Casazza, Giovanni; Hogg, Ruth; Das, Radha R; Wang, Xue; Michelessi, Manuele
2015-01-07
Diabetic macular oedema (DMO) is a thickening of the central retina, or the macula, and is associated with long-term visual loss in people with diabetic retinopathy (DR). Clinically significant macular oedema (CSMO) is the most severe form of DMO. Almost 30 years ago, the Early Treatment Diabetic Retinopathy Study (ETDRS) found that CSMO, diagnosed by means of stereoscopic fundus photography, leads to moderate visual loss in one of four people within three years. It also showed that grid or focal laser photocoagulation to the macula halves this risk. Recently, intravitreal injection of antiangiogenic drugs has also been used to try to improve vision in people with macular oedema due to DR. Optical coherence tomography (OCT) is based on optical reflectivity and is able to image retinal thickness and structure, producing cross-sectional and three-dimensional images of the central retina. It is widely used because it provides objective and quantitative assessment of macular oedema, unlike the subjectivity of fundus biomicroscopic assessment, which is routinely used by ophthalmologists instead of photography. Optical coherence tomography is also used for quantitative follow-up of the effects of treatment of CSMO. To determine the diagnostic accuracy of OCT for detecting DMO and CSMO, defined according to the ETDRS in 1985, in patients referred to ophthalmologists after DR is detected. In the update of this review we also aimed to assess whether OCT might be considered the new reference standard for detecting DMO. We searched the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA) and the NHS Economic Evaluation Database (NHSEED) (The Cochrane Library 2013, Issue 5), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to June 2013), EMBASE (January 1950 to June 2013), Web of Science Conference Proceedings Citation Index - Science (CPCI-S) (January 1990 to June 2013), BIOSIS Previews (January 1969 to June 2013), MEDION and the Aggressive Research Intelligence Facility database (ARIF). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 25 June 2013. We checked the bibliographies of relevant studies for additional references. We selected studies that assessed the diagnostic accuracy of any OCT model for detecting DMO or CSMO in patients with DR who were referred to eye clinics. Diabetic macular oedema and CSMO were diagnosed by means of fundus biomicroscopy by ophthalmologists or stereophotography by ophthalmologists or other trained personnel. Three authors independently extracted data on study characteristics and measures of accuracy. We assessed data using random-effects hierarchical sROC meta-analysis models. We included 10 studies (830 participants, 1387 eyes), published between 1998 and 2012. Prevalence of CSMO was 19% to 65% (median 50%) in the nine studies with CSMO as the target condition. Study quality was often unclear or at high risk of bias for QUADAS 2 items, specifically regarding study population selection and the exclusion of participants with poor quality images. Applicability was unclear in all studies since the professionals referring patients and the results of prior testing were not reported.
There was a specific 'unit of analysis' issue because both eyes of the majority of participants were included in the analyses as if they were independent. In the nine studies providing data on CSMO (759 participants, 1303 eyes), pooled sensitivity was 0.78 (95% confidence interval (CI) 0.72 to 0.83) and specificity was 0.86 (95% CI 0.76 to 0.93). The median central retinal thickness cut-off we selected for data extraction was 250 µm (range 230 µm to 300 µm). Central CSMO was the target condition in all but two studies, and thus our results cannot be applied to non-central CSMO. Data from three studies reporting accuracy for detection of DMO (180 participants, 343 eyes) were not pooled. Sensitivities and specificities were about 0.80 in two studies and were both 1.00 in the third study. Since this review was conceived, the role of OCT has changed and it has become a key ingredient of decision-making at all levels of ophthalmic care in this field. Moreover, disagreements between OCT and fundus examination are informative, especially false positives, which are referred to as subclinical DMO and are at higher risk of developing clinical CSMO. Using retinal thickness thresholds lower than 300 µm and ophthalmologists' fundus assessment as the reference standard, central retinal thickness measured with OCT was not sufficiently accurate to diagnose the central type of CSMO in patients with DR referred to retina clinics. However, OCT false positives are generally cases of subclinical DMO that cannot be detected clinically but still carry an increased risk of disease progression. Therefore, the increasing availability of OCT devices, together with their precision and ability to inform on retinal layer structure, now makes OCT widely recognised as the new reference standard for assessment of DMO, even in some screening settings. Thus, this review will not be updated further.
NASA Astrophysics Data System (ADS)
Ibragimov, Bulat; Toesca, Diego; Chang, Daniel; Koong, Albert; Xing, Lei
2017-12-01
Automated segmentation of the portal vein (PV) for liver radiotherapy planning is a challenging task due to potentially low vasculature contrast, complex PV anatomy, and image artifacts originating from fiducial markers and vasculature stents. In this paper, we propose a novel framework for automated segmentation of the PV from computed tomography (CT) images. We apply convolutional neural networks (CNNs) to learn the consistent appearance patterns of the PV using a training set of CT images with reference annotations and then enhance the PV in previously unseen CT images. Markov random fields (MRFs) were further used to smooth the results of the CNN enhancement and remove isolated mis-segmented regions. Finally, the CNN-MRF-based enhancement was augmented with PV centerline detection, which relied on PV anatomical properties such as tubularity and branch composition. The framework was validated on a clinical database with 72 CT images of patients scheduled for liver stereotactic body radiation therapy. The obtained segmentation accuracy was DSC = 0.83.
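The Dice similarity coefficient (DSC) quoted above measures the overlap between a predicted segmentation and the reference annotation. A minimal sketch with toy masks:

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
ref  = np.zeros((64, 64), dtype=bool); ref[24:44, 22:42] = True
print(f"DSC = {dice(pred, ref):.2f}")   # 1.0 means perfect overlap
```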
Correlation based efficient face recognition and color change detection
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.
2013-01-01
Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
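The correlation step at the heart of this approach can be emulated numerically. Below is a minimal digital analogue of matched filtering in the Fourier plane with a phase-only filter; it is a sketch of the principle, not the optical VLC architecture or the segmented filter described above, and it operates on a single grayscale channel rather than the RGB/HSV decomposition.

```python
# Digital sketch of Fourier-plane correlation with a phase-only filter.
import numpy as np

def phase_only_correlation(target, reference):
    T = np.fft.fft2(target)
    R = np.fft.fft2(reference, s=target.shape)   # zero-pad to target size
    H = np.conj(R) / (np.abs(R) + 1e-12)         # phase-only matched filter
    return np.abs(np.fft.ifft2(T * H))           # correlation plane

# A sharp, high peak indicates a match; its position gives the shift.
target = np.random.rand(128, 128)
reference = target[32:96, 32:96]                 # sub-image of the target
corr = phase_only_correlation(target, reference)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)
```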
Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2017-05-01
This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
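For readers unfamiliar with geometric hashing, the sketch below shows the core idea in a simplified 2-D form: model features are re-expressed in coordinate frames defined by ordered point pairs (bases), quantized, and stored in a hash table; at query time, votes accumulate for the matching model. The paper's variant, which matches panoramic images from a virtual environment, is considerably more involved than this toy version.

```python
# Simplified 2-D geometric hashing: build a hash table over basis-relative
# coordinates, then identify a query point set by voting.
import itertools
from collections import defaultdict
import numpy as np

def basis_coords(points, i, j):
    """Express all points in the frame defined by basis pair (i, j)."""
    origin, axis = points[i], points[j] - points[i]
    norm = np.linalg.norm(axis)
    u = axis / norm
    v = np.array([-u[1], u[0]])              # perpendicular axis
    rel = (points - origin) / norm           # scale-normalized offsets
    return np.stack([rel @ u, rel @ v], axis=1)

def build_table(models, q=0.25):
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in itertools.permutations(range(len(pts)), 2):
            for coord in basis_coords(pts, i, j):
                key = tuple(np.round(coord / q).astype(int))
                table[key].append(name)
    return table

def vote(table, pts, q=0.25):
    votes = defaultdict(int)
    for i, j in itertools.permutations(range(len(pts)), 2):
        for coord in basis_coords(pts, i, j):
            key = tuple(np.round(coord / q).astype(int))
            for name in table.get(key, []):
                votes[name] += 1
    return max(votes, key=votes.get) if votes else None

models = {"A": np.random.rand(6, 2), "B": np.random.rand(6, 2)}
table = build_table(models)
noisy_query = models["A"] + 0.01 * np.random.rand(6, 2)
print(vote(table, noisy_query))              # likely "A"
```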
The use of grey literature in health sciences: a preliminary survey.
Alberani, V; De Castro Pietrangeli, P; Mazza, A M
1990-01-01
The paper describes some initiatives in the field of grey literature (GL) and the activities, from 1985, of the Italian Library Association Study Group. The major categories of GL are defined; a survey that evaluates the use of GL by end users in the health sciences is described. References in selected periodicals and databases have been analyzed for the years 1987-1988 to determine the number of articles citing GL, the number of GL citations found in selected periodicals, the various types of GL found, and the number of technical reports cited and their country of origin and intergovernmental issuing organization. Selected databases were also searched to determine the presence of GL during those same years. The paper presents the first results obtained. PMID:2224298
A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database.
ERIC Educational Resources Information Center
Anderson, Clarita S.
1991-01-01
Describes the incorporation of textile images into the University of Maryland Historic Textile Database by a computer user rather than a computer expert. Selection of a database management system is discussed, and PICTUREPOWER, a system that integrates photographic quality images with text and numeric information in databases, is described. (three…
Texture classification of lung computed tomography images
NASA Astrophysics Data System (ADS)
Pheng, Hang See; Shamsuddin, Siti M.
2013-03-01
The development of algorithms for computer-aided diagnosis (CAD) schemes is growing rapidly to assist radiologists in medical image interpretation. Texture analysis of computed tomography (CT) scans is an important preliminary stage in computerized detection systems and classification for lung cancer. Among the different types of image feature analysis, Haralick texture features with a variety of statistical measures have been used widely for image texture description. The extraction of texture feature values is essential for a CAD system, especially in the classification of normal and abnormal tissue on cross-sectional CT images. This paper compares experimental results using texture extraction and different machine learning methods for classifying normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), naive Bayes, decision tree (J48), and backpropagation neural network. AIRS was found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiments and testing purposes, publicly available datasets in the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.
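A minimal sketch of the Haralick-style feature extraction step, assuming scikit-image; the random patch stands in for a CT region of interest, and the distance/angle choices are illustrative. The resulting feature vector is what would be fed to a classifier such as AIRS or naive Bayes.

```python
# Haralick-style texture features from a grey-level co-occurrence matrix.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a CT ROI

glcm = graycomatrix(patch,
                    distances=[1, 2],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```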
A reference system for animal biometrics: application to the northern leopard frog
Petrovska-Delacretaz, D.; Edwards, A.; Chiasson, J.; Chollet, G.; Pilliod, D.S.
2014-01-01
Reference systems and public databases are available for human biometrics, but to our knowledge nothing is available for animal biometrics. This is surprising because animals are not required to give their agreement to be in a database. This paper proposes a reference system and database for the northern leopard frog (Lithobates pipiens). Both are available for reproducible experiments. Results of both open set and closed set experiments are given.
PHYTOTOX: DATABASE DEALING WITH THE EFFECT OF ORGANIC CHEMICALS ON TERRESTRIAL VASCULAR PLANTS
A new database, PHYTOTOX, dealing with the direct effects of exogenously supplied organic chemicals on terrestrial vascular plants is described. The database consists of two files, a Reference File and Effects File. The Reference File is a bibliographic file of published research...
Frampton, Geoff K; Kalita, Neelam; Payne, Liz; Colquitt, Jill; Loveman, Emma
2016-04-01
Natural fluorescence in the eye may be increased or decreased by diseases that affect the retina. Imaging methods based on confocal scanning laser ophthalmoscopy (cSLO) can detect this 'fundus autofluorescence' (FAF) by illuminating the retina using a specific light 'excitation wavelength'. FAF imaging could assist the diagnosis or monitoring of retinal conditions. However, the accuracy of the method for diagnosis or monitoring is unclear. To conduct a systematic review to determine the accuracy of FAF imaging using cSLO for the diagnosis or monitoring of retinal conditions, including monitoring of response to therapy. Electronic bibliographic databases; scrutiny of reference lists of included studies and relevant systematic reviews; and searches of internet pages of relevant organisations, meetings and trial registries. Databases included MEDLINE, EMBASE, The Cochrane Library, Web of Science and the Medion database of diagnostic accuracy studies. Searches covered 1990 to November 2014 and were limited to the English language. References were screened for relevance using prespecified inclusion criteria to capture a broad range of retinal conditions. Two reviewers assessed titles and abstracts independently. Full-text versions of relevant records were retrieved and screened by one reviewer and checked by a second. Data were extracted and critically appraised using the Quality Assessment of Diagnostic Accuracy Studies criteria (QUADAS) for assessing risk of bias in test accuracy studies by one reviewer and checked by a second. At all stages any reviewer disagreement was resolved through discussion or arbitration by a third reviewer. Eight primary research studies have investigated the diagnostic accuracy of FAF imaging in retinal conditions: choroidal neovascularisation (one study), reticular pseudodrusen (three studies), cystoid macular oedema (two studies) and diabetic macular oedema (two studies). Sensitivity of FAF imaging using an excitation wavelength of 488 nm was generally high (range 81-100%), but was lower (55% and 32%) in two studies using longer excitation wavelengths (514 nm and 790 nm, respectively). Specificity ranged from 34% to 100%. However, owing to limitations of the data, none of the studies provide conclusive evidence of the diagnostic accuracy of FAF imaging. No studies on the accuracy of FAF imaging for monitoring the progression of retinal conditions or response to therapy were identified. Owing to study heterogeneity, pooling of diagnostic outcomes in meta-analysis was not conducted. All included studies had high risk of bias. In most studies the patient spectrum was not reflective of those who would present in clinical practice and no studies adequately reported how FAF images were interpreted. Although already in use in clinical practice, it is unclear whether or not FAF imaging is accurate, and whether or not it is applied and interpreted consistently for the diagnosis and/or monitoring of retinal conditions. Well-designed prospective primary research studies, which conform to the paradigm of diagnostic test accuracy assessment, are required to investigate the accuracy of FAF imaging in diagnosis and monitoring of inherited retinal dystrophies, early age-related macular degeneration, geographic atrophy and central serous chorioretinopathy. This study is registered as PROSPERO CRD42014014997. The National Institute for Health Research Health Technology Assessment programme.
NASA Astrophysics Data System (ADS)
Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy
Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it appears to have many desirable properties. Recognition invariance to shift, rotation, and noise was checked through medium-scale tests on the GREC symbol reference database. Even though extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the construction of the graph incurs an expensive algorithmic cost. In this article we discuss ways to reduce computation time. An alternative solution based on image compression concepts is provided and evaluated. The model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. The experimental results on the GREC2003 database show that the proposed method is characterized by good discrimination power and real robustness to noise, with acceptable computation time.
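The compression idea can be illustrated compactly: each 8x8 block is transformed with the discrete cosine transform and only a few low-frequency coefficients are kept, so subsequent matching operates in a much smaller space than the raw image. This sketch shows only the descriptor construction, not the paper's MST-based matching on top of it; block size and the number of retained coefficients are illustrative.

```python
# Compact descriptor from the block discrete cosine transform (DCT).
import numpy as np
from scipy.fft import dctn

def block_dct_descriptor(image, block=8, keep=4):
    h, w = (d - d % block for d in image.shape)   # crop to whole blocks
    feats = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = dctn(image[y:y + block, x:x + block], norm="ortho")
            feats.append(c[:keep, :keep].ravel())  # low-frequency corner
    return np.concatenate(feats)

img = np.random.rand(64, 64)
desc = block_dct_descriptor(img)
print(desc.size, "coefficients instead of", img.size, "pixels")  # 1024 vs 4096
```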
Online Reference Service--How to Begin: A Selected Bibliography.
ERIC Educational Resources Information Center
Shroder, Emelie J., Ed.
1982-01-01
Materials in this bibliography were selected and recommended by members of the Use of Machine-Assisted Reference in Public Libraries Committee, Reference and Adult Services Division, American Library Association. Topics include: financial aspects, equipment and communications considerations, comparing databases and database systems, advertising…
Image: changing how women nurses think about themselves. Literature review.
Fletcher, Karen
2007-05-01
This paper presents a review of the public and professional images of nursing in the literature and explores nurse image in the context of Strasen's self-image model. Nurses have struggled since the 1800s with the problem of image. What is known about nurses' image is from the perspective of others: the media, public or other healthcare professionals. Some hints of how nurses see themselves can be found in the literature that suggests how this image could be improved. A literature review for all dates up to 2006 was undertaken using PubMed, Medline and CINAHL databases. Additional references were identified from this literature. Sentinel articles and books were manually searched to identify key concepts. Search words used were nurse, nursing, image and self-image. The findings were examined using the framework of Strasen's self-image model. Public image appears to be intimately intertwined with nurse image. This creates the boundaries that confine and construct the image of nursing. As a profession, nurses do not have a very positive self-image nor do they think highly of themselves. Individually, each nurse has the power to shape the image of nursing. However, nurses must also work together to change the systems that perpetuate negative stereotypes of nurses' image.
Electronic Reference Library: Silverplatter's Database Networking Solution.
ERIC Educational Resources Information Center
Millea, Megan
Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…
Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S
2016-12-01
Research examining the effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently, investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample composed primarily of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie, and 63% were correctly classified as healthy vs. unhealthy, in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to the included images, the food-pics image database appears to be appropriate for use in experimental appetite- and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.
WOVOdat as a worldwide resource to improve eruption forecasts
NASA Astrophysics Data System (ADS)
Widiwijayanti, Christina; Costa, Fidel; Zar Win Nang, Thin; Tan, Karine; Newhall, Chris; Ratdomopurbo, Antonius
2015-04-01
During periods of volcanic unrest, volcanologists need to interpret signs of unrest to forecast whether an eruption is likely to occur. Some volcanic eruptions display signs of impending eruption, such as seismic activity, surface deformation, or gas emissions; but not all volcanoes give signs, and not all signs are necessarily followed by an eruption. Volcanoes behave differently. Precursory signs of an eruption are sometimes very short, less than an hour, but can also span weeks, months, or even years. Some volcanoes are regularly active and closely monitored, while others aren't. Often, the record of precursors to historical eruptions of a volcano isn't enough to allow a forecast of its future activity. Therefore, volcanologists must refer to monitoring data of unrest and eruptions at similar volcanoes. WOVOdat is the World Organization of Volcano Observatories' Database of volcanic unrest - an international effort to develop common standards for compiling and storing data on volcanic unrest in a centralized database, freely web-accessible for reference during volcanic crises, comparative studies, and basic research on pre-eruption processes. WOVOdat will be to volcanology as an epidemiological database is to medicine. We have so far incorporated about 15% of worldwide unrest data into WOVOdat, covering more than 100 eruption episodes, including: volcanic background data, eruptive histories, monitoring data (seismic, deformation, gas, hydrology, thermal, fields, and meteorology), monitoring metadata, and supporting data such as reports, images, maps, and videos. Nearly all data in WOVOdat are time-stamped and geo-referenced. Along with creating a database on volcanic unrest, WOVOdat is also developing web tools to help users query, visualize, and compare data, which can further be used for probabilistic eruption forecasting. Reference to WOVOdat will be especially helpful at volcanoes that have not erupted in historical or 'instrumental' time and for which no previous data thus exist. The more data in WOVOdat, the more useful it will be. We actively solicit relevant data contributions from volcano observatories, other institutions, and individual researchers. Detailed information and documentation about the database and how to use it can be found at www.wovodat.org.
NASA Technical Reports Server (NTRS)
2008-01-01
We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs, the ability to process any number of reference targets and any number of camera images, user-friendly editing features, including zoom in/out, translate, and load/unload, routines to help mark reference points with a Find function, while comparing them with the reference point database file, and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV, using for example a survey theodolite or a Faroarm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.
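The underlying photogrammetric computation can be sketched generically: given calibrated 3x4 projection matrices for two views and the pixel coordinates of the same reference target in each, linear (DLT) triangulation recovers the target's 3-D position, after which point-to-point distances are direct. This is a textbook sketch with toy camera parameters, not the flight software described above.

```python
# Two-view linear (DLT) triangulation of a reference target.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """x1, x2: pixel coordinates (u, v) of the same target in two views."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # de-homogenize

K = np.diag([800, 800, 1.0])                 # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0, 1.0])     # known 3-D point (homogeneous)
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]        # project into each view
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))           # ~ [0.2, -0.1, 4.0]
# Distance between two triangulated targets: np.linalg.norm(Xa - Xb)
```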
A Circular Dichroism Reference Database for Membrane Proteins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace,B.; Wien, F.; Stone, T.
2006-01-01
Membrane proteins are a major product of most genomes and the target of a large number of current pharmaceuticals, yet little information exists on their structures because of the difficulty of crystallising them; hence for the most part they have been excluded from structural genomics programme targets. Furthermore, even methods such as circular dichroism (CD) spectroscopy which seek to define secondary structure have not been fully exploited because of technical limitations to their interpretation for membrane-embedded proteins. Empirical analyses of circular dichroism (CD) spectra are valuable for providing information on secondary structures of proteins. However, the accuracy of the results depends on the appropriateness of the reference databases used in the analyses. Membrane proteins have different spectral characteristics than do soluble proteins as a result of the low dielectric constants of membrane bilayers relative to those of aqueous solutions (Chen & Wallace (1997) Biophys. Chem. 65:65-74). To date, no CD reference database exists exclusively for the analysis of membrane proteins, and hence empirical analyses based on current reference databases derived from soluble proteins are not adequate for accurate analyses of membrane protein secondary structures (Wallace et al (2003) Prot. Sci. 12:875-884). We have therefore created a new reference database of CD spectra of integral membrane proteins whose crystal structures have been determined. To date it contains more than 20 proteins, and spans the range of secondary structures from mostly helical to mostly sheet proteins. This reference database should enable more accurate secondary structure determinations of membrane-embedded proteins and will become one of the reference database options in the CD calculation server DICHROWEB (Whitmore & Wallace (2004) NAR 32:W668-673).
LIRIS flight database and its use toward noncooperative rendezvous
NASA Astrophysics Data System (ADS)
Mongrard, O.; Ankersen, F.; Casiez, P.; Cavrois, B.; Donnard, A.; Vergnol, A.; Southivong, U.
2018-06-01
ESA's fifth and last Automated Transfer Vehicle, ATV Georges Lemaître, tested new rendezvous technology before docking with the International Space Station (ISS) in August 2014. The technology demonstration, called Laser Infrared Imaging Sensors (LIRIS), provides a previously unseen view of the ISS. During Georges Lemaître's rendezvous, the LIRIS sensors, composed of two infrared cameras, one visible camera, and a scanning LIDAR (Light Detection and Ranging), were turned on two and a half hours and 3500 m from the Space Station. All sensors worked as expected, and a large amount of data was recorded and stored within ATV-5's cargo hold before being returned to Earth with the Soyuz flight 38S in September 2014. As part of the LIRIS post-flight activities, the information gathered by all sensors was collected into a flight database together with the reference ATV trajectory and attitude estimated by the ATV main navigation sensors. Although the LIRIS sensors were decoupled from the ATV main computer, the LIRIS data were carefully synchronized with ATV guidance, navigation, and control (GNC) data. Hence, the LIRIS database can be used to assess the performance of various image processing algorithms providing range and line-of-sight (LoS) navigation at long/medium range and 6-degree-of-freedom (DoF) navigation at short range. The database also contains information on the overall ATV position with respect to the Earth and the Sun direction within the ATV frame, so that the effect of the environment on the sensors can also be investigated. This paper introduces the structure of the LIRIS database and provides some examples of applications to increase the technology readiness level of noncooperative rendezvous.
NASA Astrophysics Data System (ADS)
Xu, Xiaojiang; Rioux, Timothy P.; MacLeod, Tynan; Patel, Tejash; Rome, Maxwell N.; Potter, Adam W.
2017-03-01
The purpose of this paper is to develop a database of tissue composition, distribution, volume, surface area, and skin thickness from anatomically correct human models, the Virtual Family. These models were based on high-resolution magnetic resonance imaging (MRI) of human volunteers, including two adults (male and female) and two children (boy and girl). In the segmented image dataset, each voxel is associated with a label which refers to the tissue type that occupies that specific cubic millimeter of the body. Tissue volume was calculated from the number of voxels with the same label. Volumes of 24 organs in the body and volumes of 7 tissues in 10 specific body regions were calculated. Surface area was calculated from the collection of voxels touching the exterior air. Skin thickness was estimated from skin volume and surface area. The differences between the calculated and original masses were about 3% or less for tissues or organs that are important to thermoregulatory modeling, e.g., muscle, skin, and fat. This accurate database of body tissue distributions and geometry is essential for the development of human thermoregulatory models. Data derived from medical imaging provide new, effective tools to enhance thermal physiology research and to gain deeper insight into the mechanisms by which the human body maintains heat balance.
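The voxel-counting computations described above are simple to express directly. A minimal sketch on a toy labeled grid with 1 mm^3 voxels follows; the labels and the tiny cube-shaped "body" are illustrative, not the Virtual Family data.

```python
# Tissue volume, exterior surface area, and mean skin thickness from a
# labeled voxel grid (1 mm^3 voxels), mirroring the computations above.
import numpy as np

AIR, SKIN, MUSCLE = 0, 1, 2
labels = np.zeros((60, 60, 60), dtype=np.uint8)   # toy body: a cube
labels[10:50, 10:50, 10:50] = SKIN
labels[12:48, 12:48, 12:48] = MUSCLE

# Volume: count voxels carrying each label.
volume_mm3 = {t: int((labels == t).sum()) for t in (SKIN, MUSCLE)}

# Surface area: count voxel faces where a non-air voxel touches air.
body = labels != AIR
faces = 0
for axis in range(3):
    for shift in (1, -1):
        neighbor = np.roll(body, shift, axis=axis)
        faces += int(np.logical_and(body, ~neighbor).sum())
surface_mm2 = faces

# Mean skin thickness as skin volume divided by surface area.
skin_thickness_mm = volume_mm3[SKIN] / surface_mm2
print(volume_mm3, surface_mm2, round(skin_thickness_mm, 2))
```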
Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong
2014-12-01
The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. It provides a content-boosted collaborative filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
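The collaborative-filtering step can be illustrated with a minimal user-based sketch: rows are trainees, columns are cases, entries are difficulty ratings, and an unseen case's difficulty is predicted from similar trainees' ratings. This shows only the plain collaborative-filtering part; the content-boosting of CBCF (augmenting sparse ratings with case features) is omitted, and the tiny rating matrix is invented for illustration.

```python
# Minimal user-based collaborative filtering for difficulty prediction.
import numpy as np

ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [1, 0, 5, 4.]])          # difficulty 1..5; 0 = unrated

def predict(ratings, trainee, case):
    target = ratings[trainee]
    sims = []
    for other in np.where(ratings[:, case] > 0)[0]:   # raters of this case
        common = (target > 0) & (ratings[other] > 0)  # co-rated cases
        if not common.any():
            continue
        a, b = target[common], ratings[other][common]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        sims.append((sim, ratings[other, case]))
    if not sims:
        return ratings[ratings > 0].mean()   # fall back to global mean
    w = np.array([s for s, _ in sims])
    r = np.array([r for _, r in sims])
    return float(w @ r / (np.abs(w).sum() + 1e-12))

print(round(predict(ratings, trainee=1, case=2), 2))  # similarity-weighted
```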
Yunker, Bryan E.; Cordes, Dietmar; Scherzinger, Ann L.; Dodd, Gerald D.; Shandas, Robin; Feng, Yusheng; Hunter, Kendall S.
2013-01-01
Purpose: This study investigated the ultrasound, MRI, and CT imaging characteristics of several industrial casting and molding compounds as a precursor to the future development of durable and anatomically correct flow phantoms. Methods: A set of usability and performance criteria was established for a proposed phantom design capable of supporting liquid flow during imaging. A literature search was conducted to identify the materials and methods previously used in phantom fabrication. A database of human tissue and casting material properties was compiled to facilitate the selection of appropriate materials for testing. Several industrial casting materials were selected, procured, and used to fabricate test samples that were imaged with ultrasound, MRI, and CT. Results: Five silicones and one polyurethane were selected for testing. Samples of all materials were successfully fabricated. All imaging modalities were able to discriminate between the materials tested. Ultrasound testing showed that three of the silicones could be imaged to a depth of at least 2.5 cm (1 in.). The RP-6400 polyurethane exhibited excellent contrast and edge detail for MRI phantoms and appears to be an excellent water reference for CT applications. The 10T and 27T silicones appear to be usable water references for MRI imaging. Conclusions: Based on study data and the stated selection criteria, the P-4 silicone provided sufficient material contrast to water and edge detail for use across all imaging modalities, with the benefits of availability, low cost, dimensional stability, nontoxicity, nonflammability, durability, cleanability, and optical clarity. The physical and imaging differences of the materials documented in this study may be useful for other applications. PMID:23635298
Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall
2017-01-01
A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras, which results in a range of image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups, collectively defined as the Diabetic Retinopathy, Hypertension, Age-related macular degeneration, and Glaucoma Image Set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
CHRONIS: an animal chromosome image database.
Toyabe, Shin-Ichi; Akazawa, Kouhei; Fukushi, Daisuke; Fukui, Kiichi; Ushiki, Tatsuo
2005-01-01
We have constructed a database system named CHRONIS (CHROmosome and Nano-Information System) to collect images of animal chromosomes and related nanotechnological information. CHRONIS enables rapid sharing of information on chromosome research among cell biologists and researchers in other fields via the Internet. CHRONIS is also intended to serve as a liaison tool for researchers who work in different centers. The image database contains more than 3,000 color microscopic images, including karyotypic images obtained from more than 1,000 species of animals. Researchers can browse the contents of the database via a standard World Wide Web interface at the following URL: http://chromosome.med.niigata-u.ac.jp/chronis/servlet/chronisservlet. The system enables users to input new images into the database, to locate images of interest by keyword searches, and to display the images with detailed information. CHRONIS has a wide range of applications, such as searching for appropriate probes for fluorescent in situ hybridization, comparing various kinds of microscopic images of a single species, and finding researchers working in the same field of interest.
Content Based Image Matching for Planetary Science
NASA Astrophysics Data System (ADS)
Deans, M. C.; Meyer, C.
2006-12-01
Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on their visual textures. For every image in a database, a series of filters computes the image response to localized frequencies and orientations. Filter responses are turned into a low-dimensional descriptor vector, generating a 37-dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours, and image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand-labeling results. User tests verify a false positive rate of approximately 20% for the top 14 results for MOC-NA and MER MI data; that is, typically 10 to 12 of the top 14 results match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. Qualitatively, correct matches can also be confirmed by verifying MI images taken in the same z-stack, or MOC image tiles taken from the same image strip. False negatives are difficult to quantify, as doing so would mean finding matches in a database of thousands of images that the algorithm did not detect.
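As a rough illustration of the fingerprinting idea described above (localized frequency/orientation filter responses reduced to a short descriptor, then nearest-neighbor matching), here is a hedged Python sketch. The filter bank, band edges, and descriptor length are assumptions of this sketch; the paper's exact 37-dimensional layout is not reproduced.

```python
import numpy as np

def filter_bank_fingerprint(image: np.ndarray,
                            n_orientations: int = 4,
                            n_scales: int = 3) -> np.ndarray:
    """Mean energies of crude frequency-band / orientation-sector filters."""
    h, w = image.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    radius = np.hypot(fy, fx)
    angle = np.arctan2(fy, fx)
    spectrum = np.fft.fft2(image)
    features = []
    for s in range(n_scales):
        band = (radius > 0.05 * 2**s) & (radius <= 0.05 * 2**(s + 1))
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            sector = np.cos(angle - theta) ** 2 > 0.5  # crude orientation window
            response = np.fft.ifft2(spectrum * (band & sector))
            features.append(np.mean(np.abs(response)))  # mean filter energy
    v = np.asarray(features)
    return v / (np.linalg.norm(v) + 1e-12)              # unit-norm descriptor

def rank_matches(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Return database row indices sorted by Euclidean distance to the query."""
    return np.argsort(np.linalg.norm(database - query, axis=1))
```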
Quebec Trophoblastic Disease Registry: how to make an easy-to-use dynamic database.
Sauthier, Philippe; Breguet, Magali; Rozenholc, Alexandre; Sauthier, Michaël
2015-05-01
To create an easy-to-use dynamic database designed specifically for the Quebec Trophoblastic Disease Registry (RMTQ). It is now well established that much of the success in managing trophoblastic diseases comes from the development of national and regional reference centers. Computerized databases allow the optimal use of data stored in these centers. We have created an electronic data registration system by producing a database using FileMaker Pro 12. It uses 11 external tables associated with a unique identification number for each patient. Each table allows specific data to be recorded, incorporating demographics, diagnosis, automated staging, laboratory values, pathological diagnosis, and imaging parameters. From January 1, 2009, to December 31, 2013, we used our database to register 311 patients with 380 diseases, and registrations increased by 39.2% each year between 2009 and 2012. This database allows the automatic generation of semilogarithmic curves of β-hCG values as a function of time, complete with graphic markers for applied treatments (chemotherapy, radiotherapy, or surgery), as sketched below. It generates a summary sheet for a synthetic view in real time. We have created, at low cost, an easy-to-use database specific to trophoblastic diseases that dynamically integrates staging and monitoring. We propose a 10-step procedure for a successful trophoblastic database. It improves patient care, research, and education on trophoblastic diseases in Quebec and creates an opportunity for collaboration on a national Canadian registry.
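A minimal sketch of such a semilogarithmic β-hCG follow-up curve with a treatment marker, using matplotlib; all values are invented for illustration only.

```python
import matplotlib.pyplot as plt

days = [0, 7, 14, 21, 28, 42, 56]
bhcg = [120000, 45000, 9000, 2500, 600, 40, 5]  # IU/L, hypothetical values

fig, ax = plt.subplots()
ax.semilogy(days, bhcg, marker="o")                         # log scale on y-axis
ax.axvline(21, linestyle="--", label="chemotherapy start")  # treatment marker
ax.set_xlabel("Days since registration")
ax.set_ylabel("Serum beta-hCG (IU/L)")
ax.legend()
plt.show()
```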
Microcomputer-Based Access to Machine-Readable Numeric Databases.
ERIC Educational Resources Information Center
Wenzel, Patrick
1988-01-01
Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, W.; von Laven, G.M.; Parker, T.
1993-09-01
The Bibliographic Retrieval System (BARS) is a database management system specially designed to retrieve bibliographic references. Two databases are available: (i) the Sandia Shock Compression (SSC) database, which contains over 5700 references to the literature related to stress waves in solids and their applications, and (ii) the Shock Physics Index (SPHINX), which includes over 8000 further references to stress waves in solids, material properties at intermediate and low rates, ballistic and hypervelocity impact, and explosive or shock fabrication methods. There is some overlap in the information in the two databases.
Retrieving high-resolution images over the Internet from an anatomical image database
NASA Astrophysics Data System (ADS)
Strupp-Adams, Annette; Henderson, Earl
1999-12-01
The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data, as well as associated anatomical images, from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.
Solvent replacement for green processing.
Sherman, J; Chin, B; Huibers, P D; Garcia-Valls, R; Hatton, T A
1998-01-01
The implementation of the Montreal Protocol, the Clean Air Act, and the Pollution Prevention Act of 1990 has resulted in increased awareness of organic solvent use in chemical processing. The advances made in the search to find "green" replacements for traditional solvents are reviewed, with reference to solvent alternatives for cleaning, coatings, and chemical reaction and separation processes. The development of solvent databases and computational methods that aid in the selection and/or design of feasible or optimal environmentally benign solvent alternatives for specific applications is also discussed. PMID:9539018
Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.
Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio
2009-12-01
Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so that various parameters, such as size, number and behavior, are required for the description of the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are visually exhibited. Therefore, as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as image and movie data deposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine them to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.
Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.
Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich
2017-08-01
Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms: Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest-ranking images. We found that the recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are infeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
USDA-ARS?s Scientific Manuscript database
The sodium concentrations (mg/100 g) for 23 of 125 Sentinel Foods were identified in the 2009 CDC Packaged Food Database (PFD) and compared with data in the USDA’s 2013 Standard Reference 26 (SR 26) database. Sentinel Foods are foods and beverages identified by USDA to be monitored as primary indicat...
Neurology diagnostics security and terminal adaptation for PocketNeuro project.
Chemak, C; Bouhlel, M-S; Lapayre, J-C
2008-09-01
This paper presents new approaches to medical information security and mobile phone terminal adaptation for the PocketNeuro project. The latter term refers to a project created for the management of neurological diseases. It consists of transmitting information about patients ("desk of patients") to a doctor's mobile phone during a visit and examination of a patient. These new approaches for the PocketNeuro project were analyzed in terms of medical information security and adaptation of the diagnostic images to the doctor's mobile phone. Images were extracted from a DICOM library. Matlab and its library were used as software to test our approaches and to validate our results. Experiments performed on a database of 30 neuronal medical images of 256 x 256 pixels indicated that our new approaches for the PocketNeuro project are valid and support plans for large-scale studies between French and Swiss hospitals using secured connections.
Document creation, linking, and maintenance system
Claghorn, Ronald [Pasco, WA
2011-02-15
A document creation and citation system designed to maintain a database of reference documents. The content of a selected document may be automatically scanned and indexed by the system. The selected documents may also be manually indexed by a user prior to the upload. The indexed documents may be uploaded and stored within a database for later use. The system allows a user to generate new documents by selecting content within the reference documents stored within the database and inserting the selected content into a new document. The system allows the user to customize and augment the content of the new document. The system also generates citations to the selected content retrieved from the reference documents. The citations may be inserted into the new document in the appropriate location and format, as directed by the user. The new document may be uploaded into the database and included with the other reference documents. The system also maintains the database of reference documents so that when changes are made to a reference document, the author of a document referencing the changed document will be alerted to make appropriate changes to his document. The system also allows visual comparison of documents so that the user may see differences in the text of the documents.
Selecting a database for literature searches in nursing: MEDLINE or CINAHL?
Brazier, H; Begley, C M
1996-10-01
This study compares the usefulness of the MEDLINE and CINAHL databases for students on post-registration nursing courses. We searched for nine topics, using title words only. Identical searches of the two databases retrieved 1162 references, of which 88% were in MEDLINE, 33% in CINAHL and 20% in both sources. The relevance of the references was assessed by student reviewers. The positive predictive value of CINAHL (70%) was higher than that of MEDLINE (54%), but MEDLINE produced more than twice as many relevant references as CINAHL. The sensitivity of MEDLINE was 85% (95% CI 82-88%), and that of CINAHL was 41% (95% CI 37-45%). To assess the ease of obtaining the references, we developed an index of accessibility, based on the holdings of a number of Irish and British libraries. Overall, 47% of relevant references were available in the students' own library, and 64% could be obtained within 48 hours. There was no difference between the two databases overall, but when two topics relating specifically to the organization of nursing were excluded, references found in MEDLINE were significantly more accessible. We recommend that MEDLINE should be regarded as the first choice of bibliographic database for any subject other than one related strictly to the organization of nursing.
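For reference, the two reporting measures used above have the standard definitions (with TP = relevant references retrieved, FP = irrelevant references retrieved, FN = relevant references missed):

\[
\mathrm{PPV} = \frac{TP}{TP + FP}, \qquad \mathrm{Sensitivity} = \frac{TP}{TP + FN}
\]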
Referenceless perceptual fog density prediction model
NASA Astrophysics Data System (ADS)
Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.
2014-02-01
We propose a perceptual fog density prediction model based on natural scene statistics (NSS) and "fog aware" statistical features, which can predict the visibility in a foggy scene from a single image without reference to a corresponding fogless image, without supplementary geographical or camera information, without training on human-rated judgments, and without dependency on salient objects such as lane markings or traffic signs. The proposed fog density predictor makes use only of measurable deviations from statistical regularities observed in natural foggy and fog-free images. A fog-aware collection of statistical features is derived from a corpus of foggy and fog-free images by using a space-domain NSS model and observed characteristics of foggy images such as low contrast, faint color, and shifted intensity. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The predicted fog density of the model correlates well with the measured visibility in a foggy scene, as judged in a human subjective study on a large foggy image database. As one application, the proposed model accurately evaluates the performance of defog algorithms designed to enhance the visibility of foggy images.
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, that is, their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for large countries covering several million square kilometers. This paper presents the research and technological development supporting national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, and the design and implementation of the continuous updating workflow. The use of many data sources, and the integration of these data to form a high-accuracy, quality-checked product, was required. This in turn required up-to-date techniques of image matching, semantic integration, generalization, database management, and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control; a series of technical production specifications; and a network of updating production units in different geographic places in the country.
Thermal Protection System Imagery Inspection Management System -TIIMS
NASA Technical Reports Server (NTRS)
Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.
2011-01-01
TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determinations to the mission management team. This system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to quickly ascertain the status of the inspection process and the current determination of any problem zones. The TIIMS system allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile and, via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page using customized JAVA scripts to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for a particular zone can then be seen by selecting the zone. This page contains links into the database to access the images used by the inspection engineers when they made the determinations entered into the database. Status for the inspection zones changes as determinations are refined, and is shown by the appropriate color code.
Brunger, Fern; Welch, Vivian; Asghari, Shabnam; Kaposy, Chris
2018-01-01
Background This paper focuses on the collision of three factors: a growing emphasis on sharing research through open access publication, an increasing awareness of big data and its potential uses, and an engaged public interested in the privacy and confidentiality of their personal health information. One conceptual space where this collision is brought into sharp relief is with the open availability of patient medical photographs from peer-reviewed journal articles in the search results of online image databases such as Google Images. Objective The aim of this study was to assess the availability of patient medical photographs from published journal articles in Google Images search results and the factors impacting this availability. Methods We conducted a cross-sectional study using data from an evidence map of research with transgender, gender non-binary, and other gender diverse (trans) participants. For the original evidence map, a comprehensive search of 15 academic databases was developed in collaboration with a health sciences librarian. Initial search results produced 25,230 references after duplicates were removed. Eligibility criteria were established to include empirical research of any design that included trans participants or their personal information and that was published in English in peer-reviewed journals. We identified all articles published between 2008 and 2015 with medical photographs of trans participants. For each reference, images were individually numbered in order to track the total number of medical photographs. We used odds ratios (OR) to assess the association between availability of the clinical photograph on Google Images and the following factors: whether the article was openly available online (open access, Researchgate.net, or Academia.edu), whether the article included genital images, if the photographs were published in color, and whether the photographs were located on the journal article landing page. Results We identified 94 articles with medical photographs of trans participants, including a total of 605 photographs. Of the 94 publications, 35 (37%) included at least one medical photograph that was found on Google Images. The ability to locate the article freely online contributes to the availability of at least one image from the article on Google Images (OR 2.99, 95% CI 1.20-7.45). Conclusions This is the first study to document the existence of medical photographs from peer-reviewed journals appearing in Google Images search results. Almost all of the images we searched for included sensitive photographs of patient genitals, chests, or breasts. Given that it is unlikely that patients consented to sharing their personal health information in these ways, this constitutes a risk to patient privacy. Based on the impact of current practices, revisions to informed consent policies and guidelines are required. PMID:29483069
Using an image-extended relational database to support content-based image retrieval in a PACS.
Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M
2005-12-01
This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed with the aim of efficiently answering similarity queries by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. Currently, the system works on features based on the color distribution of the images, using normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation, and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features, based on texture and on the shape of the main objects in the image.
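To make the extended-SQL idea concrete, here is a purely hypothetical illustration: a similarity predicate in an extended query is rewritten by the interpreter into standard SQL over a distance function on stored histogram features. The NEAREST keyword, the table and column names, and the histogram_distance user-defined function are all invented for this sketch, not the authors' actual syntax.

```python
# Hypothetical extended query as a client might write it; NEAREST is a
# made-up similarity keyword intercepted by the interpreter.
extended_query = """
SELECT id FROM exams
WHERE image NEAREST :query_histogram LIMIT 10
"""

# One possible translation into standard SQL, assuming the server has a
# registered user-defined function histogram_distance over the stored
# feature column; any relational server supporting UDFs could run this.
standard_query = """
SELECT id
FROM exams
ORDER BY histogram_distance(histogram, :query_histogram)
LIMIT 10
"""
```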
A Four-Dimensional Probabilistic Atlas of the Human Brain
Mazziotta, John; Toga, Arthur; Evans, Alan; Fox, Peter; Lancaster, Jack; Zilles, Karl; Woods, Roger; Paus, Tomas; Simpson, Gregory; Pike, Bruce; Holmes, Colin; Collins, Louis; Thompson, Paul; MacDonald, David; Iacoboni, Marco; Schormann, Thorsten; Amunts, Katrin; Palomero-Gallagher, Nicola; Geyer, Stefan; Parsons, Larry; Narr, Katherine; Kabani, Noor; Le Goualher, Georges; Feidler, Jordan; Smith, Kenneth; Boomsma, Dorret; Pol, Hilleke Hulshoff; Cannon, Tyrone; Kawashima, Ryuta; Mazoyer, Bernard
2001-01-01
The authors describe the development of a four-dimensional atlas and reference system that includes both macroscopic and microscopic information on structure and function of the human brain in persons between the ages of 18 and 90 years. Given the presumed large but previously unquantified degree of structural and functional variance among normal persons in the human population, the basis for this atlas and reference system is probabilistic. Through the efforts of the International Consortium for Brain Mapping (ICBM), 7,000 subjects will be included in the initial phase of database and atlas development. For each subject, detailed demographic, clinical, behavioral, and imaging information is being collected. In addition, 5,800 subjects will contribute DNA for the purpose of determining genotype–phenotype–behavioral correlations. The process of developing the strategies, algorithms, data collection methods, validation approaches, database structures, and distribution of results is described in this report. Examples of applications of the approach are described for the normal brain in both adults and children as well as in patients with schizophrenia. This project should provide new insights into the relationship between microscopic and macroscopic structure and function in the human brain and should have important implications in basic neuroscience, clinical diagnostics, and cerebral disorders. PMID:11522763
PROTAX-Sound: A probabilistic framework for automated animal sound identification
de Camargo, Ulisses Moliterno; Somervuo, Panu; Ovaskainen, Otso
2017-01-01
Autonomous audio recording is a stimulating new field in bioacoustics, with great promise for conducting cost-effective species surveys. One major current challenge is the lack of reliable classifiers capable of multi-species identification. We present PROTAX-Sound, a statistical framework to perform probabilistic classification of animal sounds. PROTAX-Sound is based on a multinomial regression model, and it can utilize as predictors any kind of sound features or classifications produced by other existing algorithms. PROTAX-Sound combines audio and image processing techniques to scan environmental audio files. It identifies regions of interest (segments of the audio file that contain a vocalization to be classified), extracts acoustic features from them, and compares them with samples in a reference database. The output of PROTAX-Sound is the probabilistic classification of each vocalization, including the possibility that it represents a species not present in the reference database. We demonstrate the performance of PROTAX-Sound by classifying audio from a species-rich case study of tropical birds. The best performing classifier achieved 68% classification accuracy for 200 bird species. PROTAX-Sound improves the classification power of current techniques by combining information from multiple classifiers in a manner that yields calibrated classification probabilities. PMID:28863178
NASA Astrophysics Data System (ADS)
Gundreddy, Rohith Reddy; Tan, Maxine; Qui, Yuchen; Zheng, Bin
2015-03-01
The purpose of this study is to develop and test a new content-based image retrieval (CBIR) scheme that achieves higher reproducibility when implemented in an interactive computer-aided diagnosis (CAD) system, without significantly reducing lesion classification performance. This new Fourier-transform-based CBIR algorithm determines the similarity of two regions of interest (ROI) from the difference of the average regional pixel-value distributions in the two Fourier-transform-mapped images under comparison. A reference image database involving 227 ROIs depicting verified soft-tissue breast lesions was used. For each testing ROI, the queried lesion center was systematically shifted from 10 to 50 pixels to simulate inter-user variation in querying a suspicious lesion center when using an interactive CAD system. The lesion classification performance and its reproducibility as the queried lesion center shifts were assessed and compared among three CBIR schemes based on the Fourier transform, mutual information, and Pearson correlation. Each CBIR scheme retrieved the 10 most similar reference ROIs and computed a likelihood score of the queried ROI depicting a malignant lesion. The experimental results showed that the three CBIR schemes yielded very comparable lesion classification performance, as measured by the areas under ROC curves, with p-values greater than 0.498. However, the CBIR scheme using the Fourier transform yielded the highest invariance to both queried-lesion-center shift and lesion size change. This study demonstrated the feasibility of improving the robustness of interactive CAD systems by adding a new Fourier-transform-based image feature to CBIR schemes.
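A loose sketch of the similarity measure described above: each ROI is mapped through a Fourier transform, the magnitude image is summarized by average pixel values over a grid of regions, and the two summaries are compared. The grid layout, log compression, and normalization are assumptions of this sketch, not the authors' exact scheme.

```python
import numpy as np

def regional_fourier_signature(roi: np.ndarray, grid: int = 8) -> np.ndarray:
    """Average pixel values over a grid of regions in the FT magnitude image."""
    mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(roi))))  # compress range
    h, w = mag.shape
    sig = np.array([
        mag[i * h // grid:(i + 1) * h // grid,
            j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ])
    return sig / sig.sum()

def dissimilarity(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Smaller value = more similar (mean absolute signature difference)."""
    return float(np.mean(np.abs(regional_fourier_signature(roi_a)
                                - regional_fourier_signature(roi_b))))
```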
Reduction of metal artifacts in x-ray CT images using a convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Yanbo; Chu, Ying; Yu, Hengyong
2017-09-01
Patients often have various metallic implants (e.g. dental fillings, prostheses), causing severe artifacts in x-ray CT images. Although a large number of metal artifact reduction (MAR) methods have been proposed in the past four decades, MAR is still one of the major problems in clinical x-ray CT. In this work, we develop a convolutional neural network (CNN) based MAR framework that combines information from the original and corrected images to suppress artifacts. Before MAR, we generate a group of data and train a CNN. First, we numerically simulate various metal-artifact cases and build a dataset that includes metal-free images (used as references), metal-inserted images, and images corrected by various MAR methods. Then, ten thousand patches are extracted from the database to train the metal artifact reduction CNN. In the MAR stage, the original image and two corrected images are stacked as a three-channel input for the CNN, and a CNN image with fewer artifacts is generated. The water-equivalent regions in the CNN image are set to a uniform value to yield a CNN prior, whose forward projections are used to replace the metal-affected projections, followed by FBP reconstruction. Experimental results demonstrate the superior metal artifact reduction capability of the proposed method over its competitors.
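A minimal PyTorch sketch of the CNN stage as described: the original image and two MAR-corrected images are stacked as a three-channel input, and the network outputs an artifact-reduced image. The layer sizes and training loss are assumptions of this sketch; the paper's exact architecture is not reproduced.

```python
import torch
import torch.nn as nn

class MARCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # one-channel output
        )

    def forward(self, original, corrected_a, corrected_b):
        # Stack the original and two corrected images as channels: N x 3 x H x W.
        x = torch.cat([original, corrected_a, corrected_b], dim=1)
        return self.net(x)

# Training would pair the output against the metal-free reference images, e.g.:
# loss = nn.functional.mse_loss(model(orig, mar1, mar2), reference)
```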
JPEG2000 and dissemination of cultural heritage over the Internet.
Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos
2004-03-01
By applying the latest technologies in image compression to manage the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record, and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.
Direct-to-digital holography reduction of reference hologram noise and Fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
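The claimed noise-reduction step reduces to averaging several complex reference image waves and dividing the object wave by the result. Below is a minimal numpy sketch under that reading, abstracting away the hologram-to-wave processing (sideband extraction); the function names are invented for illustration.

```python
import numpy as np

def reduced_noise_reference(reference_waves: list[np.ndarray]) -> np.ndarray:
    """Average a stack of complex reference image waves to suppress noise."""
    return np.mean(np.stack(reference_waves), axis=0)

def corrected_object_wave(object_wave: np.ndarray,
                          reference_waves: list[np.ndarray]) -> np.ndarray:
    """Divide the object complex image wave by the reduced-noise reference,
    which also reduces smearing in Fourier space per the claims above."""
    ref = reduced_noise_reference(reference_waves)
    return object_wave / (ref + 1e-12)  # crude guard against division by zero
```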
Automatic and quantitative measurement of laryngeal video stroboscopic images.
Kuo, Chung-Feng Jeffrey; Kuo, Joseph; Hsiao, Shang-Wun; Lee, Chi-Lung; Lee, Jih-Chin; Ke, Bo-Han
2017-01-01
The laryngeal video stroboscope is an important instrument for physicians to analyze abnormalities and diseases in the glottal area, and it has been widely used around the world. However, without quantized indices, physicians can only make subjective judgments on glottal images. We designed a new laser projection marking module and applied it to the laryngeal video stroboscope to provide scale-conversion reference parameters for glottal imaging and to convert the physiological parameters of the glottis. Image processing technology was used to segment the important image regions of interest. Information on the glottis was quantified, and a vocal fold image segmentation system was completed to assist clinical diagnosis and increase accuracy. Regarding image processing, histogram equalization was used to enhance glottal image contrast. A center-weighted median filter removes image noise while retaining the texture of the glottal image. Statistical threshold determination was used for automatic segmentation of the glottal image. As the glottis image contains saliva and light spots, which constitute image noise, this noise was eliminated by erosion, dilation, opening, and closing operations to highlight the vocal area. We also used image processing to automatically identify the vocal fold region in order to quantify information from the glottal image, such as glottal area, vocal fold perimeter, vocal fold length, glottal width, and vocal fold angle. A quantized glottis image database was created to assist physicians in diagnosing glottal diseases more objectively.
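A rough OpenCV sketch of the processing chain described above: contrast enhancement, median filtering, automatic thresholding, and morphological cleanup. Otsu's method stands in for the paper's "statistical threshold determination", and a plain median filter stands in for the center-weighted variant; both substitutions are assumptions of this sketch.

```python
import cv2
import numpy as np

def segment_glottis(gray: np.ndarray) -> np.ndarray:
    """gray: 8-bit single-channel frame; returns a binary glottal mask."""
    eq = cv2.equalizeHist(gray)              # histogram equalization
    smooth = cv2.medianBlur(eq, 5)           # suppress noise, keep texture
    # The glottal opening is dark, so an inverse Otsu threshold marks it white.
    _, mask = cv2.threshold(smooth, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove light spots
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    return mask

# Quantitative indices such as glottal area (pixel count, rescaled via the
# laser marker) and perimeter can then be read off the mask, e.g. with
# cv2.findContours, cv2.contourArea, and cv2.arcLength.
```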
Varrone, Andrea; Dickson, John C; Tossici-Bolt, Livia; Sera, Terez; Asenbaum, Susanne; Booij, Jan; Kapucu, Ozlem L; Kluge, Andreas; Knudsen, Gitte M; Koulibaly, Pierre Malick; Nobili, Flavio; Pagani, Marco; Sabri, Osama; Vander Borght, Thierry; Van Laere, Koen; Tatsch, Klaus
2013-01-01
Dopamine transporter (DAT) imaging with [(123)I]FP-CIT (DaTSCAN) is an established diagnostic tool in parkinsonism and dementia. Although qualitative assessment criteria are available, DAT quantification is important for research and for completion of a diagnostic evaluation. One critical aspect of quantification is the availability of normative data, considering possible age and gender effects on DAT availability. The aim of the European Normal Control Database of DaTSCAN (ENC-DAT) study was to generate a large database of [(123)I]FP-CIT SPECT scans in healthy controls. SPECT data from 139 healthy controls (74 men, 65 women; age range 20-83 years, mean 53 years) acquired in 13 different centres were included. Images were reconstructed using the ordered-subset expectation-maximization algorithm without correction (NOACSC), with attenuation correction (AC), and with both attenuation and scatter correction using the triple-energy window method (ACSC). Region-of-interest analysis was performed using the BRASS software (caudate and putamen), and the Southampton method (striatum). The outcome measure was the specific binding ratio (SBR). A significant effect of age on SBR was found for all data. Gender had a significant effect on SBR in the caudate and putamen for the NOACSC and AC data, and only in the left caudate for the ACSC data (BRASS method). Significant effects of age and gender on striatal SBR were observed for all data analysed with the Southampton method. Overall, there was a significant age-related decline in SBR of between 4% and 6.7% per decade. This study provides a large database of [(123)I]FP-CIT SPECT scans in healthy controls across a wide age range and with balanced gender representation. Higher DAT availability was found in women than in men. An average age-related decline in DAT availability of 5.5% per decade was found for both genders, in agreement with previous reports. The data collected in this study may serve as a reference database for nuclear medicine centres and for clinical trials using [(123)I]FP-CIT SPECT as the imaging marker.
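The specific binding ratio outcome measure is commonly computed as the ratio of specific striatal counts to non-specific reference counts; a hedged restatement in standard form (the BRASS and Southampton implementations differ in how regions and counts are defined):

\[
\mathrm{SBR} = \frac{C_{\text{ROI}} - C_{\text{ref}}}{C_{\text{ref}}}
\]

where \(C_{\text{ROI}}\) is the mean count density in the striatal region of interest and \(C_{\text{ref}}\) that in a non-specific reference region such as the occipital cortex.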
Interhospital network system using the worldwide web and the common gateway interface.
Oka, A; Harima, Y; Nakano, Y; Tanaka, Y; Watanabe, A; Kihara, H; Sawada, S
1999-05-01
We constructed an interhospital network system using the World Wide Web (WWW) and the Common Gateway Interface (CGI). Original clinical images are digitized and stored as a database for educational and research purposes. Personal computers (PCs) are available for data treatment and browsing. Our system is simple: digitized images are stored on a Unix server machine. Images of important and interesting clinical cases are selected and registered into the image database using CGI. The main image format is 8- or 12-bit Joint Photographic Experts Group (JPEG). Original clinical images are finally stored on CD-ROM using a CD recorder. The image viewer can browse all of the images for one case at once as thumbnail pictures; image quality can be selected depending on the user's purpose. Using the network system, clinical images of interesting cases can be rapidly transmitted to and discussed with other related hospitals. Data transmission from related hospitals takes 1 to 2 minutes per 500 Kbytes of data; more distant hospitals (e.g., Rakusai Hospital, Kyoto) take about 1 minute more. The mean number of accesses to our image database in a recent 3-month period was 470. There are in total about 200 cases in our image database, acquired over the past 2 years. Our system is useful for communication and image handling between hospitals, and we describe the elements of our system and its image database.
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both the database structures and the mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding technique with its related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server: because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they catalog devices and modify device status and device network location. The medium level manages image/video files on a physical basis; it manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.
Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki
2016-01-01
The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in much state-of-the-art research. However, the pixel values of all the images were simply digitized as relative density values by a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display system input value of digital imaging and communications in medicine (DICOM), called the presentation value (P-value), which can maintain visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images in the JSRT standard digital image database to DICOM format, followed by conversion of the pixel values to P-values using an original program developed by ourselves. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained among displays of different luminance.
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
[Table of contents excerpt: 2.6.1 Quad Trees for Image Representation and Processing; 2.6.2 Databases; 2.6.2.1 Definitions and Basic Concepts; 2.6.3 Use of Databases in Hierarchical Scene Analysis; 2.6.4 Use of Relational Tables; 2.7.1 Multisensor Image Database Systems (MIDAS); 2.7.2 Relational Database System for Pictures; 2.7.3 Relational Pictorial Database.]
ERIC Educational Resources Information Center
Harzbecker, Joseph, Jr.
1993-01-01
Describes the National Institutes of Health's GenBank DNA sequence database and how it can be accessed through the Internet. A real reference question, which was answered successfully using the database, is reproduced to illustrate and elaborate on the potential of the Internet for information retrieval. (10 references) (KRN)
ERIC Educational Resources Information Center
Kurhan, Scott H.; Griffing, Elizabeth A.
2011-01-01
Reference services in public libraries are changing dramatically. The Internet, online databases, and shrinking budgets are all making it necessary for non-traditional reference staff to become familiar with online reference tools. Recognizing the need for cross-training, Chesapeake Public Library (CPL) developed a program called the Database…
BODY IMAGE IN CHILDHOOD: AN INTEGRATIVE LITERATURE REVIEW.
Neves, Clara Mockdece; Cipriani, Flávia Marcelle; Meireles, Juliana Fernandes Filgueiras; Morgado, Fabiane Frota da Rocha; Ferreira, Maria Elisa Caputo
2017-01-01
To analyse the scientific literature regarding the evaluation of body image in children through an integrative literature review. An intersection of the keywords "body image" AND "child" was conducted in the Scopus, Medline, and Virtual Health Library (BVS - Biblioteca Virtual de Saúde) databases. The electronic search covered studies published from January 2013 to January 2016, in order to capture the most current investigations on the subject. Exclusion criteria were: articles in duplicate; no available summaries; not empirical; not assessing any component of body image; samples that did not match the target age of this review (0 to 12 years old) and/or considered clinical populations; and articles not fully available. In total, 7,681 references were identified and, after the exclusion criteria were applied, 33 studies were analysed. Results showed that the perceptual and attitudinal dimensions, with a focus on body dissatisfaction, were explored, mainly evaluated by silhouette scales. Intervention programs have been developed internationally to prevent negative body image in children. The studies included in this review evaluated specific aspects of body image in children, especially body perception and body dissatisfaction. The creation of tools specifically designed for children to evaluate body image is recommended, to promote the psychosocial well-being of individuals throughout human development.
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling, involved in re-size and rotation transformations, is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes a minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
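The abstract lists amplitude, angular frequency, standard deviation, and duration among the five kernel parameters. One plausible parameterization consistent with that list is a windowed, Gaussian-damped sinusoid, sketched below with phase assumed as the fifth free parameter; the paper's exact functional form is not given here.

```python
import numpy as np

def resampling_kernel(x: np.ndarray, amplitude: float, omega: float,
                      phase: float, sigma: float, duration: float) -> np.ndarray:
    """A hypothetical five-parameter interpolation kernel."""
    envelope = np.exp(-x**2 / (2.0 * sigma**2))   # Gaussian damping (std. dev.)
    carrier = np.cos(omega * x + phase)           # oscillatory component
    window = np.abs(x) <= duration / 2.0          # finite support (duration)
    return amplitude * envelope * carrier * window

# Fitting these parameters to the periodic correlation pattern left by
# re-sampling (e.g., by gradient-based minimization of a least-squares error,
# as in the paper's detection stage) flags re-sized or rotated images.
```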
Structural scene analysis and content-based image retrieval applied to bone age assessment
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Brosig, André; Deserno, Thomas M.; Ott, Bastian; Günther, Rolf W.
2009-02-01
Radiological bone age assessment is based on global or local image regions of interest (ROI), such as the epiphyseal regions or the area of the carpal bones. Usually, these regions are compared to a standardized reference, and a score determining skeletal maturity is calculated. For computer-assisted diagnosis, automatic ROI extraction has so far been done by heuristic approaches. In this work, we apply a high-level approach of scene analysis for knowledge-based ROI segmentation. Based on a set of 100 reference images from the IRMA database, a so-called structural prototype (SP) is trained. In this graph-based structure, the 14 phalanges and 5 metacarpal bones are represented by nodes, with associated location, shape, and texture parameters modeled by Gaussians. Accordingly, the Gaussians describing the relative positions, relative orientations, and other relative parameters between two nodes are associated with the edges. Thereafter, segmentation of a hand radiograph is done in several steps: (i) a multi-scale region merging scheme is applied to extract visually prominent regions; (ii) graph/sub-graph matching to the SP robustly identifies a subset of the 19 bones; (iii) the SP is registered to the current image for complete scene reconstruction; and (iv) the epiphyseal regions are extracted from the reconstructed scene. The evaluation is based on 137 images of Caucasian males from the USC hand atlas. Overall, an error rate of 32% is achieved; for the 6 middle distal and medial/distal epiphyses, 23% of all extractions need adjustments. On average, 9.58 of the 14 epiphyseal regions were extracted successfully per image. This is promising for further use in content-based image retrieval (CBIR) and CBIR-based automatic bone age assessment.
Application of furniture images selection based on neural network
NASA Astrophysics Data System (ADS)
Wang, Yong; Gao, Wenwen; Wang, Ying
2018-05-01
In the construction of a furniture image database of 2 million images, the problem of low database quality arises. A combination of CNN and metric learning algorithms is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This addresses the problems of complex image screening methods, low accuracy, and long processing times. After data quality is improved, the deep learning algorithm achieves excellent image-matching performance in actual furniture retrieval applications.
Image query and indexing for digital x rays
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Thoma, George R.
1998-12-01
The web-based medical information retrieval system (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from the National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL queries of the text, and viewing of the returned text records and images using a standard browser. We are now working (1) to determine the utility of data directly derived from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae, using techniques from vertebral morphometry. From this, we will derive vertebral features and add them to the database. This image-derived data will enhance the user's data access capability by enabling combined SQL/image-content queries.
Computer-aided diagnosis of malignant mammograms using Zernike moments and SVM.
Sharma, Shubhi; Khanna, Pritee
2015-02-01
This work is directed toward the development of a computer-aided diagnosis (CAD) system to detect abnormalities or suspicious areas in digital mammograms and classify them as malignant or nonmalignant. The original mammogram is preprocessed to separate the breast region from its background. To work on the suspicious area of the breast, region of interest (ROI) patches of a fixed size of 128×128 are extracted from the original large-sized digital mammograms. For training, patches are extracted manually from a preprocessed mammogram. For testing, patches are extracted from a highly dense area identified by a clustering technique. For all extracted patches corresponding to a mammogram, Zernike moments of different orders are computed and stored as a feature vector. A support vector machine (SVM) is used to classify the extracted ROI patches. The experimental study shows that the use of Zernike moments of order 20 with an SVM classifier gives better results than other studies. The proposed system is tested on the Image Retrieval in Medical Applications (IRMA) reference dataset and the Digital Database for Screening Mammography (DDSM). On the IRMA reference dataset, it attains 99% sensitivity and 99% specificity, and on the DDSM database, it obtains 97% sensitivity and 96% specificity. To verify the applicability of Zernike moments as a fitting texture descriptor, the performance of the proposed CAD system is compared with two other well-known texture descriptors, namely the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT).
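A hedged sketch of the feature/classifier pairing described above, using the third-party mahotas package for Zernike moments and scikit-learn for the SVM; the preprocessing, patch extraction, and kernel choice are assumptions of this sketch, not the authors' exact pipeline.

```python
import numpy as np
import mahotas
from sklearn.svm import SVC

def zernike_features(patch: np.ndarray, order: int = 20) -> np.ndarray:
    """patch: 128x128 grayscale ROI; radius=64 covers the whole patch."""
    return mahotas.features.zernike_moments(patch, radius=64, degree=order)

def train_classifier(X_train: np.ndarray, y_train: np.ndarray) -> SVC:
    """X_train: stacked feature vectors; y_train: 1 = malignant, 0 = nonmalignant."""
    clf = SVC(kernel="rbf")  # kernel choice is an assumption of this sketch
    clf.fit(X_train, y_train)
    return clf
```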
NASA Astrophysics Data System (ADS)
Konstantinov, Pavel; Varentsov, Mikhail; Platonov, Vladimir; Samsonov, Timofey; Zhdanova, Ekaterina; Chubarova, Natalia
2017-04-01
The main goal of this investigation is to develop a kind of "urban reanalysis": a database of meteorological and radiation fields over the Moscow megalopolis for the period 1981-2014 with high spatial resolution. The main meteorological fields for the Moscow region are reproduced with the COSMO-CLM regional model (including urban parameters) with a horizontal resolution of 1x1 km. The time resolution of the output fields is 1 hour. For the radiation fields it is useful to calculate the sky view factor (SVF) to obtain the losses of UV radiation in complex urban conditions. For raster-based SVF analysis, the shadow-casting algorithm proposed by Richens (1997) is popular (see Ratti and Richens 2004, Gal et al. 2008, for example): the SVF image is obtained by combining shadow images computed from different directions. An alternative is a raster-based SVF calculation similar to the vector approach, using a digital elevation model of the urban relief. The output radiation field includes UV radiation with a horizontal resolution of 1x1 km. This study was financially supported by the Russian Foundation for Basic Research within the framework of scientific project no. 15-35-21129 mol_a_ved and project no. 15-35-70006 mol_a_mos. References: 1. Gal, T., Lindberg, F., and Unger, J., 2008. Computing continuous sky view factors using 3D urban raster and vector databases: comparison and application to urban climate. Theoretical and Applied Climatology, 95 (1-2), 111-123. 2. Richens, P., 1997. Image processing for urban scale environmental modelling. In: J.D. Spitler and J.L.M. Hensen, eds. International IBPSA Conference Building Simulation, Prague. 3. Ratti, C. and Richens, P., 2004. Raster analysis of urban form. Environment and Planning B: Planning and Design, 31 (2), 297-309.
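For illustration, a simplified per-pixel SVF computation in the spirit of the horizon-scanning/shadow-casting approach cited above; the formula SVF = 1 - mean(sin^2(horizon angle)) over azimuths is a common approximation, and the toy DEM is invented.

import numpy as np

def sky_view_factor(dem, cellsize=1.0, n_dirs=16, max_dist=100):
    # Horizon-scanning approximation of the SVF at the centre pixel:
    # SVF = 1 - mean over azimuths of sin^2(max horizon angle)
    r0, c0 = dem.shape[0] // 2, dem.shape[1] // 2
    h0 = dem[r0, c0]
    terms = []
    for a in np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False):
        max_angle = 0.0
        for d in range(1, max_dist):
            r = int(round(r0 + d * np.sin(a)))
            c = int(round(c0 + d * np.cos(a)))
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                break
            angle = np.arctan2(dem[r, c] - h0, d * cellsize)
            max_angle = max(max_angle, angle)
        terms.append(np.sin(max_angle) ** 2)
    return 1.0 - float(np.mean(terms))

dem = np.zeros((101, 101))
dem[:, 60:] = 20.0                 # a 20 m "building wall" east of the centre
print(sky_view_factor(dem))        # < 1: part of the sky dome is blocked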
Planning for CD-ROM in the Reference Department.
ERIC Educational Resources Information Center
Graves, Gail T.; And Others
1987-01-01
Outlines the evaluation criteria used by the reference department at the Williams Library at the University of Mississippi in selecting databases and hardware used in CD-ROM workstations. The factors discussed include database coverage, costs, and security. (CLB)
MAGA, a new database of gas natural emissions: a collaborative web environment for collecting data.
NASA Astrophysics Data System (ADS)
Cardellini, Carlo; Chiodini, Giovanni; Frigeri, Alessandro; Bagnato, Emanuela; Frondini, Francesco; Aiuppa, Alessandro
2014-05-01
The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate the available data in order to characterize and quantify the phenomena at various scales. A new and detailed web database (MAGA: MApping GAs emissions) has been developed, and recently improved, to collect data on carbon degassing from volcanic and non-volcanic environments. The MAGA database allows researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and with the ingestion into the database of data from: i) a literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing, and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores, and ii) the revision and update of the Googas database of non-volcanic emissions of the Italian territory (Chiodini et al., 2008), in the framework of the Deep Earth Carbon Degassing (DECADE) research initiative of the Deep Carbon Observatory (DCO). For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature, and gas flux magnitude. Gas sampling, analysis, and flux measurement methods are also reported, together with references and contacts for researchers expert in each site. In this phase data can be accessed over the network from a web interface; data-driven web services, from which software clients can request data directly from the database, are planned to be implemented shortly. This way Geographical Information Systems (GIS) and virtual globes (e.g., Google Earth) could easily access the database, and data could be exchanged with other databases. At the moment the database includes: i) more than 1000 flux data points on volcanic plume degassing from the Etna and Stromboli volcanoes; ii) data from ~30 sites of diffuse soil degassing from the Neapolitan volcanoes, the Azores, the Canary Islands, Etna, Stromboli, and Vulcano Island, and several data on fumarolic emissions (~7 sites) with CO2 fluxes; and iii) data from ~270 non-volcanic gas emission sites in Italy. We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility to archive locations and qualitative information for gas emission sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
TRENDS: A flight test relational database user's guide and reference manual
NASA Technical Reports Server (NTRS)
Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.
1994-01-01
This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. It has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for new and intermediate-skilled users. Examples of the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.
Healthcare reform for imagers: finding a way forward now.
Douglas, Pamela S; Picard, Michael H
2013-03-01
The changing healthcare environment presents many challenges to cardiovascular imagers. This perspective paper uses current trends to propose strategies that cardiovascular imagers can follow to lead in managing change and developing the imaging laboratory of the future. In the area of quality, imagers are encouraged to follow guidelines and standards, implement structured reporting and laboratory databases, adopt ongoing quality improvement programs, and use benchmarks to confirm imaging quality. In the area of access, imagers are encouraged to enhance availability of testing, focus on patient and referring physician value and satisfaction, collaboratively implement new technologies and uses of imaging, integrate health information technology in the laboratory, and work toward the appropriate inclusion of imaging in new healthcare delivery models. In the area of cost, imagers are encouraged to minimize laboratory operating expenses without compromising quality, and to take an active role in care redesign initiatives to ensure that imaging is utilized appropriately and at proper time intervals. Imagers are also encouraged to learn leadership and management skills, undertake strategic planning exercises, and build strong, collaborative teams. Although it is difficult to predict the future of cardiovascular imaging delivery, a reasonable sense of the likely direction of many changes and careful attention to the fundamentals of good health care (quality, access, and cost) can help imagers to thrive now and in the future. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Yassin, Nisreen I R; Omran, Shaimaa; El Houby, Enas M F; Allam, Hemat
2018-03-01
The incidence of breast cancer in women has increased significantly in recent years. Physicians' diagnosis and detection of breast cancer can be assisted by computerized feature extraction and classification algorithms. This paper presents the conduct and results of a systematic review (SR) that aims to investigate the state of the art regarding computer-aided diagnosis/detection (CAD) systems for breast cancer. The SR was conducted using a comprehensive selection of scientific databases as reference sources, allowing access to diverse publications in the field. The scientific databases used are SpringerLink (SL), ScienceDirect (SD), IEEE Xplore Digital Library, and PubMed. Inclusion and exclusion criteria were defined and applied to each retrieved work to select those of interest. From 320 studies retrieved, 154 studies were included. However, the scope of this research is limited to scientific and academic works and excludes commercial interests. This survey provides a general analysis of the current status of CAD systems according to the image modalities used and the machine learning based classifiers. Potential research studies have been discussed to create more objective and efficient CAD systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Construction of an image database for newspaper articles using CTS
NASA Astrophysics Data System (ADS)
Kamio, Tatsuo
Nihon Keizai Shimbun, Inc. developed a system for automatically building an image database of articles by use of CTS (Computer Typesetting System). Besides the articles and headlines input into CTS, it reproduces images of elements such as photographs and graphs for each article in accordance with position information on the page. So to speak, the computer itself clips the articles out of the newspaper. The image database is accumulated in magnetic and optical files and is output to users' facsimile machines. With the diffusion of CTS, the number of newspaper companies starting to build article databases is increasing rapidly; the said system is the first attempt to build such a database automatically. This paper describes the CTS equipment which supports this system and gives an outline of it.
ReprDB and panDB: minimalist databases with maximal microbial representation.
Zhou, Wei; Gay, Nicole; Oh, Julia
2018-01-18
Profiling of shotgun metagenomic samples is hindered by a lack of unified microbial reference genome databases that (i) assemble genomic information from all open access microbial genomes, (ii) have relatively small sizes, and (iii) are compatible with various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small sizes: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions in multiple sequenced strains. With the databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis on the same datasets. reprDB and panDB leverage the rapid increases in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage or memory space and indexing or analyzing time. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.
The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.
Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A
2010-08-01
The iris is regarded as one of the most useful traits for biometric recognition and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore exclusively suitable for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available to researchers concerned with visible wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.
The land management and operations database (LMOD)
USDA-ARS?s Scientific Manuscript database
This paper presents the design, implementation, deployment, and application of the Land Management and Operations Database (LMOD). LMOD is the single authoritative source for land management and operations reference data within the USDA enterprise data warehouse. LMOD supports modeling appl...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos
Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate total organ doses calculated using our database are within 1% of those calculated using Monte Carlo simulations with the same geometry and scan parameters for all organs except red bone marrow (within 6%), and within 23% of published estimates for different voxelized phantoms. Results from the example of using the database to estimate organ dose for coronary angiography CT acquisitions show 2.1%, 1.1%, and -32% change in breast dose and 2.1%, -0.74%, and 4.7% change in lung dose for reduced kVp, tube current modulated, and partial angle protocols, respectively, relative to the reference protocol. Results show a -19.2% difference in dose to the eye lens for a tilted scan relative to a nontilted scan. The reported relative changes in organ doses are presented without quantification of image quality and are for the sole purpose of demonstrating the use of the proposed database. Conclusions: The proposed database and calculation method enable the estimation of organ dose for coronary angiography and brain perfusion CT scans utilizing any spectral shape and angular tube current modulation scheme by taking advantage of the precalculated Monte Carlo simulation results. The database can be used in conjunction with image quality studies to develop optimized acquisition techniques and may be particularly beneficial for optimizing dual kVp acquisitions for which numerous kV, mA, and filtration combinations may be investigated.
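The dose-lookup calculation described above reduces to a weighted sum over the precomputed tables; the following numpy sketch shows the arithmetic with placeholder table values, spectrum, and CTDIvol numbers (all invented, since the actual database values are not reproduced here).

import numpy as np

# Placeholder table D[e, a]: dose to one organ per emitted photon, for each
# incident energy e (5-150 keV) and projection angle a (1000 over 360 degrees)
energies = np.arange(5, 151)
n_angles = 1000
D = np.full((energies.size, n_angles), 1e-15)        # invented values

spectrum = np.exp(-((energies - 60.0) / 20.0) ** 2)  # relative photons per keV
spectrum /= spectrum.sum()
photons = np.full(n_angles, 1e6)   # per-angle photon counts (TCM would go here)

# Organ dose = sum over energies and angles of table * spectrum * photon count
organ_dose = np.einsum("ea,e,a->", D, spectrum, photons)

# Scanner-specific estimate: normalize by the database CTDIvol and scale by a
# physically measured CTDIvol (both values invented here)
ctdi_db, ctdi_meas = 1e-9, 12.0
print(organ_dose, organ_dose / ctdi_db * ctdi_meas)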
Fire-induced water-repellent soils, an annotated bibliography
Kalendovsky, M.A.; Cannon, S.H.
1997-01-01
The development and nature of water-repellent, or hydrophobic, soils are important issues in evaluating hillslope response to fire. The following annotated bibliography was compiled to consolidate existing published research on the topic. Emphasis was placed on the types, causes, effects and measurement techniques of water repellency, particularly with respect to wildfires and prescribed burns. Each annotation includes a general summary of the respective publication, as well as highlights of interest to this focus. Although some references on the development of water repellency without fires, the chemistry of hydrophobic substances, and remediation of water-repellent conditions are included, coverage of these topics is not intended to be comprehensive. To develop this database, the GeoRef, Agricola, and Water Resources Abstracts databases were searched for appropriate references, and the bibliographies of each reference were then reviewed for additional entries. Additional references will be added to this bibliography as they become available. The annotated bibliography can be accessed on the Web at http://geohazards.cr.usgs.gov/html_files/landslides/ofr97-720/biblio.html. A database consisting of the references and keywords is available through a link at the above address. This database was compiled using the EndNote Plus 2 software by Niles and Associates, which is necessary to search the database.
Recently active traces of the Bartlett Springs Fault, California: a digital database
Lienkaemper, James J.
2010-01-01
The purpose of this map is to show the location of and evidence for recent movement on active fault traces within the Bartlett Springs Fault Zone, California. The location and recency of the mapped traces are primarily based on geomorphic expression of the fault as interpreted from large-scale aerial photography. In a few places, evidence of fault creep and offset Holocene strata in trenches and natural exposures has confirmed the activity of some of these traces. This publication is formatted both as a digital database for use within a geographic information system (GIS) and, for broader public access, as map images that may be browsed online or downloaded as a summary map. The report text describes the types of scientific observations used to make the map, gives references pertaining to the fault and the evidence of faulting, and provides guidance for the use and limitations of the map.
Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Young, Steven D.
2005-01-01
In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-01-01
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-06-06
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.
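For orientation, a minimal CNN classifier of the general kind described, written in PyTorch; the layer sizes, input resolution, and the 106-class output (e.g., the number of SDUMLA-HMT subjects) are illustrative and do not reproduce the paper's deeper architecture.

import torch
import torch.nn as nn

class FingerVeinCNN(nn.Module):
    # Minimal CNN sketch for finger-vein identification (not the paper's
    # exact architecture, which is based on a deeper pretrained network)
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                      # x: (N, 1, 64, 64) NIR images
        f = self.features(x).flatten(1)
        return self.classifier(f)

model = FingerVeinCNN(n_classes=106)
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)                            # -> torch.Size([4, 106])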
NASA Astrophysics Data System (ADS)
Burgos, Ninon; Guerreiro, Filipa; McClelland, Jamie; Presles, Benoît; Modat, Marc; Nill, Simeon; Dearnaley, David; deSouza, Nandita; Oelfke, Uwe; Knopf, Antje-Christin; Ourselin, Sébastien; Cardoso, M. Jorge
2017-06-01
To tackle the problem of magnetic resonance imaging (MRI)-only radiotherapy treatment planning (RTP), we propose a multi-atlas information propagation scheme that jointly segments organs and generates pseudo x-ray computed tomography (CT) data from structural MR images (T1-weighted and T2-weighted). As the performance of the method strongly depends on the quality of the atlas database composed of multiple sets of aligned MR, CT and segmented images, we also propose a robust way of registering atlas MR and CT images, which combines structure-guided registration, and CT and MR image synthesis. We first evaluated the proposed framework in terms of segmentation and CT synthesis accuracy on 15 subjects with prostate cancer. The segmentations obtained with the proposed method were compared using the Dice score coefficient (DSC) to the manual segmentations. Mean DSCs of 0.73, 0.90, 0.77 and 0.90 were obtained for the prostate, bladder, rectum and femur heads, respectively. The mean absolute error (MAE) and the mean error (ME) were computed between the reference CTs (non-rigidly aligned to the MRs) and the pseudo CTs generated with the proposed method. The MAE was on average 45.7 ± 4.6 HU and the ME -1.6 ± 7.7 HU. We then performed a dosimetric evaluation by re-calculating plans on the pseudo CTs and comparing them to the plans optimised on the reference CTs. We compared the cumulative dose volume histograms (DVH) obtained for the pseudo CTs to the DVHs obtained for the reference CTs in the planning target volume (PTV) located in the prostate, and in the organs at risk at different DVH points. We obtained average differences of -0.14% in the PTV for D98%, and between -0.14% and 0.05% in the PTV, bladder, rectum and femur heads for Dmean and D2%. Overall, we demonstrate that the proposed framework is able to automatically generate accurate pseudo CT images and segmentations in the pelvic region, potentially bypassing the need for a CT scan for accurate RTP.
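The MAE, ME, and DSC figures quoted above are straightforward to compute once a pseudo CT and an aligned reference CT (or two binary segmentations) are available; a minimal sketch with simulated arrays:

import numpy as np

def mae_me(pseudo_ct, ref_ct):
    # Mean absolute error and mean error (in HU) between pseudo and reference CT
    diff = pseudo_ct.astype(float) - ref_ct.astype(float)
    return np.abs(diff).mean(), diff.mean()

def dice(seg_a, seg_b):
    # Dice score coefficient between two binary segmentations
    inter = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * inter / (seg_a.sum() + seg_b.sum())

rng = np.random.default_rng(0)
ref = rng.normal(0, 100, (32, 32, 32))          # stand-in reference CT (HU)
pseudo = ref + rng.normal(0, 40, ref.shape)     # stand-in pseudo CT
print(mae_me(pseudo, ref))                      # (MAE, ME)
print(dice(ref > 0, pseudo > 0))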
A new catalog of planetary maps
NASA Technical Reports Server (NTRS)
Batson, R. M.; Inge, J. L.
1991-01-01
A single, concise reference to all existing planetary maps, including lunar ones, is being prepared that will allow map users to identify and locate maps of their areas of interest. This will be the first such comprehensive listing of planetary maps. Although the USGS shows index maps on the collar of each map sheet, periodically publishes index maps of Mars, and provides informal listings of the USGS map database, no tabulation exists that identifies all planetary maps, including those published by DMA and other organizations. The catalog will consist of a booklet containing small-scale image maps with superimposed quadrangle boundaries and map data tabulations.
2016-01-01
The widespread use of ultrasonography places it in a key position for the risk stratification of thyroid nodules. The French proposal is a five-tier system, our version of a thyroid imaging reporting and database system (TI-RADS), which includes a standardized vocabulary and report and a quantified risk assessment. It allows the selection of the nodules that should be referred for fine-needle aspiration biopsies. Effort should be directed towards merging the different risk stratification systems utilized around the world and testing this unified system with multi-center studies. PMID:26324117
A database for spectral image quality
NASA Astrophysics Data System (ADS)
Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve
2015-01-01
We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, ...) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.
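Two simple spectral image difference measures of the sort such a database can be used to evaluate, sketched in Python; these generic measures are illustrations, not the distortion metrics studied by the authors.

import numpy as np

def spectral_rmse(ref, dist):
    # Per-pixel RMSE across bands, averaged over the image
    return np.sqrt(((ref - dist) ** 2).mean(axis=-1)).mean()

def spectral_angle(ref, dist):
    # Mean spectral angle (radians) between corresponding pixel spectra
    num = (ref * dist).sum(axis=-1)
    den = np.linalg.norm(ref, axis=-1) * np.linalg.norm(dist, axis=-1)
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 160))          # a 160-band image, as in the database
dist = ref + rng.normal(0, 0.01, ref.shape)
print(spectral_rmse(ref, dist), spectral_angle(ref, dist))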
Document image database indexing with pictorial dictionary
NASA Astrophysics Data System (ADS)
Akbari, Mohammad; Azimi, Reza
2010-02-01
In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then a pictorial dictionary is constructed based on this attribute for the subwords. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarities. The proposed methods have been evaluated on a Persian document image database. The results have proved the ability of this approach in document image information retrieval.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions... Product Safety Information Database. (2) Commission or CPSC means the Consumer Product Safety Commission... Information Database, also referred to as the Database, means the database on the safety of consumer products...
Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James
1997-01-01
Abstract Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338
Han, Guanghui; Liu, Xiabi; Han, Feifei; Santika, I Nyoman Tenaya; Zhao, Yanfeng; Zhao, Xinming; Zhou, Chunwu
2015-02-01
Lung computed tomography (CT) imaging signs play important roles in the diagnosis of lung diseases. In this paper, we review the significance of CT imaging signs in disease diagnosis and determine the inclusion criteria for the CT scans and CT imaging signs in our database. We developed software for annotating abnormal regions and designed the storage scheme for CT images and annotation data. We then present a publicly available database of lung CT imaging signs, called LISS for short, which contains 271 CT scans and 677 abnormal regions in them. The 677 abnormal regions are divided into nine categories of common CT imaging signs of lung disease (CISLs). The ground truth of these CISL regions and the corresponding categories are provided. Furthermore, to make the database publicly available, all private data in the CT scans are eliminated or replaced with provisioned values. The main characteristic of our LISS database is that it is developed from the new perspective of CT imaging signs of lung diseases instead of the commonly considered lung nodules. Thus, it is promising for computer-aided detection and diagnosis research and medical education.
Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.
Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2016-03-01
Blind quality assessment of 3D images encounters more new challenges than its 2D counterpart. In this paper, we propose a blind quality assessment method for stereoscopic images that learns the characteristics of receptive fields (RFs) from the perspective of dictionary learning and constructs quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching for the optimal GRF and LRF indexes in the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three public 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.
A reference estimator based on composite sensor pattern noise for source device identification
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Chang-Tsun; Guan, Yu
2014-02-01
It has been proved that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within this application framework. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, meaning that only natural images with scene details, rather than blue-sky images, are available for reference SPN estimation. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images for estimating the reference SPN. To deal with the aforementioned problems, in this work a novel reference estimator is proposed. Experimental results show that our proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.
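For context, the baseline the authors improve upon, averaging denoising residuals into a reference SPN, can be sketched as follows; the choice of scikit-image's wavelet denoiser is an assumption, standing in for the usual SPN extraction filter, and the simulated fingerprint is invented.

import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(img):
    # SPN estimate of one image: the image minus its wavelet-denoised version
    return img - denoise_wavelet(img, rescale_sigma=True)

def reference_spn(images):
    # Baseline reference: average the residuals (the "50 blue-sky images" recipe)
    return np.mean([noise_residual(im) for im in images], axis=0)

rng = np.random.default_rng(0)
fingerprint = rng.normal(0, 0.01, (64, 64))            # simulated sensor pattern
flats = [0.5 + fingerprint + rng.normal(0, 0.05, (64, 64)) for _ in range(50)]
# Correlation between the estimated reference and the true pattern
print(np.corrcoef(reference_spn(flats).ravel(), fingerprint.ravel())[0, 1])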
NASA Astrophysics Data System (ADS)
Rupcich, Franco John
The purpose of this study was to quantify the effectiveness of techniques intended to reduce dose to the breast during CT coronary angiography (CTCA) scans with respect to task-based image quality, and to evaluate the effectiveness of optimal energy weighting in improving the contrast-to-noise ratio (CNR), and thus the potential for reducing breast dose, during energy-resolved dedicated breast CT. A database quantifying organ dose for several radiosensitive organs irradiated during CTCA, including the breast, was generated using Monte Carlo simulations. This database facilitates estimation of the organ-specific dose deposited during CTCA protocols using arbitrary x-ray spectra or tube-current modulation schemes without the need to run Monte Carlo simulations. The database was used to estimate breast dose for simulated CT images acquired for a reference protocol and five protocols intended to reduce breast dose. For each protocol, the performance of two tasks (detection of signals with unknown locations) was compared over a range of breast dose levels using a task-based signal-detectability metric: the estimator of the area under the exponential free-response relative operating characteristic curve, AFE. For large-diameter/medium-contrast signals, when maintaining equivalent AFE, the 80 kV partial, 80 kV, 120 kV partial, and 120 kV tube-current modulated protocols reduced breast dose by 85%, 81%, 18%, and 6%, respectively, while the shielded protocol increased breast dose by 68%. Results for the small-diameter/high-contrast signal followed similar trends, but with smaller magnitudes of the percent changes in dose. The 80 kV protocols demonstrated the greatest reduction in breast dose; however, the subsequent increase in noise may be clinically unacceptable. Tube output for these protocols can be adjusted to achieve more desirable noise levels with a smaller dose reduction. The improvement in CNR of optimally weighted projection-based and image-based images relative to photon counting was investigated for six different energy bin combinations using a bench-top energy-resolving CT system with a cadmium zinc telluride (CZT) detector. The non-ideal spectral response reduced the CNR of the projection-based weighted images, while image-based weighting improved the CNR for five out of the six investigated bin combinations despite this non-ideal response, indicating the potential for image-based weighting to reduce breast dose during dedicated breast CT.
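A toy illustration of image-based energy weighting and its effect on CNR; the bin images, noise model, and weights are invented, and the optimal-weight search is omitted.

import numpy as np

def cnr(img, sig, bkg):
    # Contrast-to-noise ratio between a signal ROI and a background ROI
    return abs(img[sig].mean() - img[bkg].mean()) / img[bkg].std()

def image_based_weighting(bin_images, weights):
    # Weighted sum of reconstructed energy-bin images (image-based weighting)
    w = np.asarray(weights, float)
    return np.tensordot(w / w.sum(), np.asarray(bin_images), axes=1)

# Toy example: 3 energy-bin images with a faint disc-shaped signal whose
# contrast grows with bin index
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
sig = (yy - 32) ** 2 + (xx - 32) ** 2 < 100
bkg = ~sig
bins = [rng.normal(100 + 5 * k * sig, 10) for k in range(3)]
print(cnr(image_based_weighting(bins, [1, 1, 1]), sig, bkg))  # equal weights
print(cnr(image_based_weighting(bins, [0, 1, 2]), sig, bkg))  # favour high-contrast bins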
A manifesto for cardiovascular imaging: addressing the human factor
Fraser, Alan G
2017-01-01
Abstract Our use of modern cardiovascular imaging tools has not kept pace with their technological development. Diagnostic errors are common but seldom investigated systematically. Rather than more impressive pictures, our main goal should be more precise tests of function which we select because their appropriate use has therapeutic implications which in turn have a beneficial impact on morbidity or mortality. We should practise analytical thinking, use checklists to avoid diagnostic pitfalls, and apply strategies that will reduce biases and avoid overdiagnosis. We should develop normative databases, so that we can apply diagnostic algorithms that take account of variations with age and risk factors and that allow us to calculate pre-test probability and report the post-test probability of disease. We should report the imprecision of a test, or its confidence limits, so that reference change values can be considered in daily clinical practice. We should develop decision support tools to improve the quality and interpretation of diagnostic imaging, so that we choose the single best test irrespective of modality. New imaging tools should be evaluated rigorously, so that their diagnostic performance is established before they are widely disseminated; this should be a shared responsibility of manufacturers with clinicians, leading to cost-effective implementation. Trials should evaluate diagnostic strategies against independent reference criteria. We should exploit advances in machine learning to analyse digital data sets and identify those features that best predict prognosis or responses to treatment. Addressing these human factors will reap benefit for patients, while technological advances continue unpredictably. PMID:29029029
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; this means participants don't need to install any application, and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2010-01-01
The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2012-01-01
The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.
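A compressed sketch of the pipeline both records describe (local mean luminance, contrast images, a contrast-sensitivity-like filter, and pooling into a single JND-like number); the filter scales and the pooling exponent are illustrative placeholders, not the patented parameters.

import numpy as np
from scipy.ndimage import gaussian_filter

def jnd(test_lum, ref_lum, pool_exp=2.9):
    local_mean = gaussian_filter(ref_lum, sigma=8)        # luminance filter
    c_test = (test_lum - local_mean) / local_mean         # contrast images
    c_ref = (ref_lum - local_mean) / local_mean
    band = lambda c: gaussian_filter(c, 1) - gaussian_filter(c, 4)  # CSF stand-in
    d = np.abs(band(c_test) - band(c_ref))
    return (d ** pool_exp).mean() ** (1.0 / pool_exp)     # Minkowski pooling

rng = np.random.default_rng(0)
ref = 50.0 + 10.0 * rng.random((64, 64))                  # luminance images
test = ref + rng.normal(0, 0.5, ref.shape)                # slightly distorted
print(jnd(test, ref))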
Geometric registration of images by similarity transformation using two reference points
NASA Technical Reports Server (NTRS)
Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)
2011-01-01
A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
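The two-reference-point similarity transformation has a compact closed form when image coordinates are treated as complex numbers: z -> a*z + b, with a encoding scale and rotation. A sketch (mathematically equivalent to, though not literally, the translate-then-rotate/scale sequence claimed):

import numpy as np

def similarity_from_two_points(p1, p2, q1, q2):
    # Scale/rotation/translation mapping (p1, p2) onto (q1, q2), done with
    # complex arithmetic: z -> a*z + b, where a encodes scale and rotation
    p1, p2, q1, q2 = (complex(*pt) for pt in (p1, p2, q1, q2))
    a = (q2 - q1) / (p2 - p1)
    b = q1 - a * p1
    return a, b

def apply(a, b, points):
    z = points[:, 0] + 1j * points[:, 1]
    w = a * z + b
    return np.column_stack([w.real, w.imag])

a, b = similarity_from_two_points((0, 0), (1, 0), (2, 2), (2, 4))  # rotate 90°, scale 2
print(apply(a, b, np.array([[0.5, 0.0]])))   # midpoint maps to (2, 3)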
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
Active browsing using similarity pyramids
NASA Astrophysics Data System (ADS)
Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.
1998-12-01
In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.
Content based image retrieval using local binary pattern operator and data mining techniques.
Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan
2015-01-01
Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases using feature vectors extracted from the images. These feature vectors globally define the visual content present in an image, characterized by, e.g., texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then subsequently used to build an ultrasound image database, and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
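A minimal sketch of an LBP-based feature vector using scikit-image; the uniform variant with P=8, R=1 is an assumption standing in for whichever LBP variant the study selected.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature_vector(gray, P=8, R=1):
    # Histogram of uniform LBP codes as a global texture descriptor
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(lbp_feature_vector(img))      # 10-bin descriptor for P=8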
Low dose CT image restoration using a database of image patches
NASA Astrophysics Data System (ADS)
Ha, Sungsoo; Mueller, Klaus
2015-01-01
Reducing the radiation dose in CT imaging has become an active research topic and many solutions have been proposed to remove the significant noise and streak artifacts in the reconstructed images. Most of these methods operate within the domain of the image that is subject to restoration. This, however, poses limitations on the extent of filtering possible. We advocate taking into consideration the vast body of external knowledge that exists in the domain of already acquired medical CT images, since, after all, this is what radiologists do when they examine these low quality images. We can incorporate this knowledge by creating a database of prior scans, either of the same patient or a diverse corpus of different patients, to assist in the restoration process. Our paper follows up on our previous work that used a database of images. Using whole images, however, is challenging since it requires tedious and error-prone registration and alignment. Our new method eliminates these problems by storing a diverse set of small image patches in conjunction with a localized similarity matching scheme. We also show empirically that it is sufficient to store these patches without anatomical tags, since their statistics are sufficiently strong to yield good similarity matches from the database and, as a direct effect, produce image restorations of high quality. A final experiment demonstrates that our global database approach can recover image features that are difficult to preserve with conventional denoising approaches.
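The localized patch-matching idea can be sketched as a nearest-neighbour lookup over a database of clean patches; this simplified version (non-overlapping patches, plain replacement) omits the paper's localization and blending details.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def restore_with_patch_db(noisy, clean_patches, patch=8):
    # Replace each non-overlapping noisy patch by its nearest database patch
    db = np.asarray(clean_patches).reshape(len(clean_patches), -1)
    nn = NearestNeighbors(n_neighbors=1).fit(db)
    out = noisy.copy()
    for r in range(0, noisy.shape[0] - patch + 1, patch):
        for c in range(0, noisy.shape[1] - patch + 1, patch):
            q = noisy[r:r + patch, c:c + patch].reshape(1, -1)
            idx = nn.kneighbors(q, return_distance=False)[0, 0]
            out[r:r + patch, c:c + patch] = db[idx].reshape(patch, patch)
    return out

rng = np.random.default_rng(0)
clean = [rng.random((8, 8)) for _ in range(500)]          # stand-in patch database
noisy = clean[0] + rng.normal(0, 0.05, (8, 8))
print(np.abs(restore_with_patch_db(noisy, clean) - clean[0]).max())  # -> 0.0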
ClearedLeavesDB: an online database of cleared plant leaf images
2014-01-01
Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB), is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985
ClearedLeavesDB: an online database of cleared plant leaf images.
Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S
2014-03-28
Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB), is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
Mulrane, Laoighse; Rexhepaj, Elton; Smart, Valerie; Callanan, John J; Orhan, Diclehan; Eldem, Türkan; Mally, Angela; Schroeder, Susanne; Meyer, Kirstin; Wendt, Maria; O'Shea, Donal; Gallagher, William M
2008-08-01
The widespread use of digital slides has only recently come to the fore with the development of high-throughput scanners and high-performance viewing software. This development, along with the optimisation of compression standards and image transfer techniques, has allowed the technology to be used in wide-reaching applications including integration of images into hospital information systems and histopathological training, as well as the development of automated image analysis algorithms for prediction of histological aberrations and quantification of immunohistochemical stains. Here, the use of this technology in the creation of a comprehensive library of images of preclinical toxicological relevance is demonstrated. The images, acquired using the Aperio ScanScope CS and XT slide acquisition systems, form part of the ongoing EU FP6 Integrated Project, Innovative Medicines for Europe (InnoMed). In more detail, PredTox (abbreviation for Predictive Toxicology) is a subproject of InnoMed and comprises a consortium of 15 industrial (13 large pharma, 1 technology provider and 1 SME) and three academic partners. The primary aim of this consortium is to assess the value of combining data generated from 'omics technologies (proteomics, transcriptomics, metabolomics) with the results from more conventional toxicology methods, to facilitate further informed decision making in preclinical safety evaluation. A library of 1709 scanned images was created of full-face sections of liver and kidney tissue specimens from male Wistar rats treated with 16 proprietary and reference compounds of known toxicity; additional biological materials from these treated animals were separately used to create 'omics data, which will ultimately be used to populate an integrated toxicological database. With respect to the assessment of the digital slides, a web-enabled digital slide management system, Digital SlideServer (DSS), was employed to enable integration of the digital slide content into the 'omics database and to facilitate remote viewing by pathologists connected with the project. DSS also facilitated manual annotation of digital slides by the pathologists, specifically in relation to marking particular lesions of interest. Tissue microarrays (TMAs) were constructed from the specimens for the purpose of creating a repository of tissue from animals used in the study with a view to later-stage biomarker assessment. As the PredTox consortium itself aims to identify new biomarkers of toxicity, these TMAs will be a valuable means of validation. In summary, a large repository of histological images was created enabling the subsequent pathological analysis of samples through remote viewing and, along with the utilisation of TMA technology, will allow the validation of biomarkers identified by the PredTox consortium. The population of the PredTox database with these digitised images represents the creation of the first toxicological database integrating 'omics and preclinical data with histological images.
Marshall, Zack; Brunger, Fern; Welch, Vivian; Asghari, Shabnam; Kaposy, Chris
2018-02-26
This paper focuses on the collision of three factors: a growing emphasis on sharing research through open access publication, an increasing awareness of big data and its potential uses, and an engaged public interested in the privacy and confidentiality of their personal health information. One conceptual space where this collision is brought into sharp relief is with the open availability of patient medical photographs from peer-reviewed journal articles in the search results of online image databases such as Google Images. The aim of this study was to assess the availability of patient medical photographs from published journal articles in Google Images search results and the factors impacting this availability. We conducted a cross-sectional study using data from an evidence map of research with transgender, gender non-binary, and other gender diverse (trans) participants. For the original evidence map, a comprehensive search of 15 academic databases was developed in collaboration with a health sciences librarian. Initial search results produced 25,230 references after duplicates were removed. Eligibility criteria were established to include empirical research of any design that included trans participants or their personal information and that was published in English in peer-reviewed journals. We identified all articles published between 2008 and 2015 with medical photographs of trans participants. For each reference, images were individually numbered in order to track the total number of medical photographs. We used odds ratios (OR) to assess the association between availability of the clinical photograph on Google Images and the following factors: whether the article was openly available online (open access, Researchgate.net, or Academia.edu), whether the article included genital images, if the photographs were published in color, and whether the photographs were located on the journal article landing page. We identified 94 articles with medical photographs of trans participants, including a total of 605 photographs. Of the 94 publications, 35 (37%) included at least one medical photograph that was found on Google Images. The ability to locate the article freely online contributes to the availability of at least one image from the article on Google Images (OR 2.99, 95% CI 1.20-7.45). This is the first study to document the existence of medical photographs from peer-reviewed journals appearing in Google Images search results. Almost all of the images we searched for included sensitive photographs of patient genitals, chests, or breasts. Given that it is unlikely that patients consented to sharing their personal health information in these ways, this constitutes a risk to patient privacy. Based on the impact of current practices, revisions to informed consent policies and guidelines are required. ©Zack Marshall, Fern Brunger, Vivian Welch, Shabnam Asghari, Chris Kaposy. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.02.2018.
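For readers who want to reproduce this style of analysis, the odds ratio and a Woolf (log-normal) confidence interval can be computed from a 2x2 table as in the short Python sketch below; the cell counts shown are illustrative, not the study's data.

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and Woolf 95% CI from a 2x2 table:
        a/b = freely available articles with/without an image found on
        Google Images, c/d = the same for non-free articles (illustrative)."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    print(odds_ratio_ci(25, 30, 10, 29))  # illustrative counts only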
Trends in CT colonography: bibliometric analysis of the 100 most-cited articles.
Mohammed, Mohammed Fahim; Chahal, Tejbir; Gong, Bo; Bhulani, Nizar; O'Keefe, Michael; O'Connell, Timothy; Nicolaou, Savvas; Khosa, Faisal
2017-12-01
Our purpose was to identify the top 100 cited articles focused on CT colonography (CTC). This list could then be analysed to establish trends in CTC research while also identifying common characteristics of highly cited works. A Web of Science search using our search terms was performed to create a database of candidate articles; a total of 10,597 articles were returned from this search. Articles were included if they focused on diagnostic imaging, imaging technique, cost-effectiveness analysis, clinical use, patient preference or trends in CTC. Articles were ranked by citation count and screened by two attending radiologists. The following information was collected from each article: database citations, citations per year, year published, journal, authors, department affiliation, study type and design, statistical analysis, sample size, modality and topic. Citations for the top 100 articles ranged from 73 to 1179, and citations per year ranged from 4.5 to 84.21. Articles were published across 22 journals, most commonly Radiology (n = 37) and American Journal of Roentgenology (n = 19). Authors contributed from 1 to 20 articles. 19% of first authors were affiliated with a department other than radiology. Of the 100 articles, the most common topics were imaging technique (n = 40), diagnostic utility of imaging (n = 28) and clinical uses (n = 18). Our study provides intellectual milestones in CTC research, reflecting on the characteristics and quality of the published literature. It also provides the most influential references related to CTC and serves as a guide to the features of a citable paper in this field.
Robust Skull-Stripping Segmentation Based on Irrational Mask for Magnetic Resonance Brain Images.
Moldovanu, Simona; Moraru, Luminița; Biswas, Anjan
2015-12-01
This paper proposes a new method for simple, efficient, and robust removal of the non-brain tissues in MR images based on an irrational mask for filtration within a binary morphological operation framework. The proposed skull-stripping segmentation is based on two irrational masks, of sizes 3 × 3 and 5 × 5, each having weights that sum to the value of the transcendental number π as provided by the Gregory-Leibniz infinite series. This keeps the rate of useful-pixel loss low. The proposed method has been tested in two ways. First, it has been validated as a binary method by comparing and contrasting it with Otsu's, Sauvola's, Niblack's, and Bernsen's binary methods. Secondly, its accuracy has been verified against three state-of-the-art skull-stripping methods: the graph cuts method, the method based on the Chan-Vese active contour model, and the simplex mesh and histogram analysis skull stripping. The performance of the proposed method has been assessed using the Dice scores, overlap and extra fractions, and sensitivity and specificity as statistical measures. The gold standard has been provided by two expert neurologists. The proposed method has been tested and validated on 26 image series, containing 216 images, from two publicly available databases: the Whole Brain Atlas and the Internet Brain Segmentation Repository, which include a highly variable sample population (with reference to age, sex, and healthy/diseased status). The approach performs accurately on both standardized databases. The main advantages of the proposed method are its robustness and speed.
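A minimal sketch of one plausible reading of the "irrational mask" idea is given below: the nine weights of the 3 × 3 mask are consecutive terms 4(-1)^k/(2k+1) of the Gregory-Leibniz series, so they sum to a partial-sum approximation of π. The weight layout and the thresholding step are assumptions, not the authors' implementation.

    # Sketch only: mask weights are Gregory-Leibniz terms, summing to ~pi.
    import numpy as np
    from scipy.ndimage import convolve

    terms = np.array([4 * (-1) ** k / (2 * k + 1) for k in range(9)])
    mask3 = terms.reshape(3, 3)      # weights sum to ~3.2524 (9-term pi sum)

    def irrational_filter(slice2d):
        """Convolve an MR slice with the pi-sum mask, then binarize."""
        filtered = convolve(slice2d.astype(float), mask3, mode="reflect")
        return filtered > filtered.mean()   # simple global threshold (assumed)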
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
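The AND/OR term search that dbEngine describes reduces to a simple filter; the sketch below shows the logic in Python (the original was a CGI program, so everything here, including the record layout, is an illustrative assumption).

    def search(records, and_terms=(), or_terms=()):
        """Return records containing all AND-terms and at least one OR-term."""
        hits = []
        for rec in records:
            text = rec["text"].lower()
            if all(t.lower() in text for t in and_terms) and (
                    not or_terms or any(t.lower() in text for t in or_terms)):
                hits.append(rec)
        return hits

    records = [{"id": 1, "text": "mitochondrial protein, human"},
               {"id": 2, "text": "nuclear protein, mouse"}]
    print(search(records, and_terms=["protein"], or_terms=["human", "rat"]))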
WFC3 Anomalies Flagged by the Quicklook Team
NASA Astrophysics Data System (ADS)
Gosmeyer, C. M.
2017-09-01
Like all detectors, the UVIS and IR detectors of the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope are subject to detector and optical anomalies. Many of them can be corrected for or avoided with careful planning. We summarize, with examples, the various WFC3 anomalies, which when found are flagged by the WFC3 "Quicklook" team of daily image inspectors and stored in an internal database. We also give examples of known detector features and defects, and some non-standard observing modes. The aim of this report is (1) to educate users of WFC3 to more easily assess the quality of science images and (2) to serve as a reference for the WFC3 Quicklook team members in their daily visual inspections. This report was produced by C.M. Gosmeyer and The Quicklook Team.
AN ASSESSMENT OF GROUND TRUTH VARIABILITY USING A "VIRTUAL FIELD REFERENCE DATABASE"
A "Virtual Field Reference Database (VFRDB)" was developed from field measurment data that included location and time, physical attributes, flora inventory, and digital imagery (camera) documentation foy 1,01I sites in the Neuse River basin, North Carolina. The sampling f...
Huber, Lara
2011-06-01
In the neurosciences, digital databases are increasingly becoming important tools for rendering and distributing data. This development is due to the growing impact of imaging-based trial design in cognitive neuroscience, encompassing morphological as well as functional imaging technologies. As the case of the 'Laboratory of Neuro Imaging' (LONI) shows, databases are attributed a specific epistemological power: since the 1990s, databasing has been seen to foster the integration of neuroscientific data, although local regimes of data production, manipulation, and interpretation also challenge this development. Databasing in the neurosciences goes along with the introduction of new structures for integrating local data, hence establishing digital spaces of knowledge (epistemic spaces): at this stage, inherent norms of digital databases are affecting regimes of imaging-based trial design, for example clinical research into Alzheimer's disease.
Gao, Jun-Xue; Pei, Qiu-Yan; Li, Yun-Tao; Yang, Zhen-Juan
2015-06-01
The aim of this study was to create a database of anatomical ultrathin cross-sectional images of fetal hearts with different congenital heart diseases (CHDs) and to preliminarily investigate its clinical application. Forty Chinese fetal heart samples from induced labor due to different CHDs were cut transversely at 60-μm thickness. All thoracic organs were removed from the thoracic cavity after formalin fixation, embedded in optimum cutting temperature compound, and then frozen at -25°C for 2 hours. Subsequently, macro shots of the frozen serial sections were obtained using a digital camera in order to build a database of anatomical ultrathin cross-sectional images. Images in the database clearly displayed the fetal heart structures. After importing the images into three-dimensional software, the following functions could be realized: (1) based on the original database of transverse sections, databases of sagittal and coronal sections could be constructed; and (2) the original and constructed databases could be displayed continuously and dynamically, and rotated at arbitrary angles. They could also be displayed synchronously. The aforementioned functions of the database allowed for the retrieval of images and three-dimensional anatomical characteristics of the different fetal CHDs, and virtualization of fetal echocardiography findings. A database of 40 different cross-sectional fetal CHDs was established. An extensive database library of fetal CHDs, from which sonographers and students can study the anatomical features of fetal CHDs and virtualize fetal echocardiography findings via either centralized training or distance education, can be established in the future by accumulating further cases. Copyright © 2015. Published by Elsevier B.V.
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deeply learnt Convolutional Neural Networks. This is used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called the Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and does not depend on external image standardization such as image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases, other imaging modalities, and other anatomies.
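The DL-HF pipeline itself is not reproduced here, so the sketch below shows a generic stand-in: random-hyperplane hashing that turns deep feature vectors into compact binary codes, with retrieval by Hamming distance. It illustrates the encode-then-retrieve pattern only, not the authors' hashing forests or Parts Histogram descriptor.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_hasher(dim, n_bits=64):
        return rng.standard_normal((dim, n_bits))        # random hyperplanes

    def encode(features, planes):
        return (features @ planes > 0).astype(np.uint8)  # one bit per plane

    def retrieve(query_code, db_codes, k=5):
        dists = (query_code != db_codes).sum(axis=1)     # Hamming distance
        return np.argsort(dists)[:k]

    feats = rng.standard_normal((1000, 512))   # stand-in CNN descriptors
    planes = fit_hasher(512)
    codes = encode(feats, planes)
    print(retrieve(encode(feats[:1], planes)[0], codes))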
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
Infrared thermography based on artificial intelligence for carpal tunnel syndrome diagnosis.
Jesensek Papez, B; Palfy, M; Turk, Z
2008-01-01
Thermography for the measurement of surface temperatures is well known in industry, although it is not established in medicine despite its safety, painlessness, non-invasiveness, easy reproducibility, and low running costs. Promising results have been achieved in nerve entrapment syndromes, although thermography has never represented a real alternative to electromyography. Here an attempt is described to improve the diagnosis of carpal tunnel syndrome with thermography using a computer-based system employing artificial neural networks to analyse the images. Method reliability was tested on 112 images (depicting the dorsal and palmar sides of 26 healthy and 30 pathological hands), with the hand divided into 12 segments and compared relative to a reference. Palmar segments appeared to have no beneficial influence on classification outcome, whereas dorsal segments gave improved outcomes with classification success rates near or over 80%, and finger segments influenced by the median nerve appeared to be of greatest importance. These are preliminary results from a limited number of images, and further research will be undertaken as our image database grows.
DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh
2014-10-01
The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for solving the difficulties that arise during automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different non-tribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. This database also represents some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful to researchers in biometric recognition. It also provides a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose results may serve other researchers as baseline performance scores.
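The PCA baseline mentioned above is the classic eigenfaces recipe; a minimal sketch follows, with random arrays standing in for the (non-public) DeitY-TU images.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 64 * 64))   # stand-in flattened face images
    y_train = rng.integers(0, 20, 200)     # stand-in identity labels

    pca = PCA(n_components=50).fit(X_train)          # eigenface subspace
    clf = KNeighborsClassifier(n_neighbors=1).fit(
        pca.transform(X_train), y_train)             # nearest-neighbor match

    X_probe = rng.random((5, 64 * 64))
    print(clf.predict(pca.transform(X_probe)))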
Adelman, Marisa Rachel
2014-07-01
To describe novel innovations and techniques for the detection of high-grade dysplasia. Studies were identified through the PubMed database, spanning the last 10 years. The key words (["computerized colposcopy" or "digital colposcopy" or "spectroscopy" or "multispectral digital colposcopy" or "dynamic spectral imaging" or "electrical impedance spectroscopy" or "confocal endomicroscopy" or "confocal microscopy" or "optical coherence tomography"] and ["cervical dysplasia" or "cervical precancer" or "cervix" or "cervical"]) were used. The inclusion criteria were published articles of original research referring to noncolposcopic evaluation of the cervix for the detection of cervical dysplasia. Only English-language articles from the past 10 years were included, in which the technologies were used in vivo and sensitivities and specificities could be calculated. The single author reviewed the articles for inclusion. A primary search of the database yielded 59 articles, and a secondary cross-reference yielded 12 articles. Thirty-two articles met the inclusion criteria. An instrument that globally assesses the cervix, such as computer-assisted colposcopy, optical spectroscopy, and dynamic spectral imaging, would provide the most comprehensive estimate of disease and is therefore best suited when treatment is preferred. Electrical impedance spectroscopy, confocal microscopy, and optical coherence tomography provide information at the cellular level to estimate histology and are therefore best suited when deferment of treatment is preferred. If a device is to eventually replace the colposcope, it will likely combine technologies to best meet the needs of the target population, and as such, no single instrument may prove to be universally appropriate. Analyses of false-positive rates, additional colposcopies and biopsies, cost, and absolute life-savings will be important when considering these technologies and are limited thus far.
Development and evaluation of oral reporting system for PACS.
Umeda, T; Inamura, K; Inamoto, K; Ikezoe, J; Kozuka, T; Kawase, I; Fujii, Y; Karasawa, H
1994-05-01
Experimental workstations for oral reporting and synchronized image filing have been developed and evaluated by radiologists and referring physicians. The file medium is a 5.25-inch rewritable magneto-optical disk of 600-Mb capacity whose file format is in accordance with the IS&C specification. The evaluation results indicate that this system is superior to other existing methods of the same kind, such as transcribing, dictating, handwriting, typewriting and key selection. The most significant advantage of the system is that images and their interpretation are never separated. The first practical application, to the teaching file and the teaching conference, is contemplated in the Osaka University Hospital. This system is completely digital in terms of images, voices and demographic data, so that on-line transmission, off-line communication or filing to any database can easily be realized in a PACS environment. We are developing an integrated system with a speech recognizer connected to this digitized oral system.
Aldridge, R Benjamin; Glodzik, Dominik; Ballerini, Lucia; Fisher, Robert B; Rees, Jonathan L
2011-05-01
Non-analytical reasoning is thought to play a key role in dermatology diagnosis. Considering its potential importance, surprisingly little work has been done to research whether similar identification processes can be supported in non-experts. We describe here a prototype diagnostic support software, which we have used to examine the ability of medical students (at the beginning and end of a dermatology attachment) and lay volunteers, to diagnose 12 images of common skin lesions. Overall, the non-experts using the software had a diagnostic accuracy of 98% (923/936) compared with 33% for the control group (215/648) (Wilcoxon p < 0.0001). We have demonstrated, within the constraints of a simplified clinical model, that novices' diagnostic scores are significantly increased by the use of a structured image database coupled with matching of index and referent images. The novices achieve this high degree of accuracy without any use of explicit definitions of likeness or rule-based strategies.
RefPrimeCouch—a reference gene primer CouchApp
Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus
2013-01-01
To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831
Optics survivability support, volume 2
NASA Astrophysics Data System (ADS)
Wild, N.; Simpson, T.; Busdeker, A.; Doft, F.
1993-01-01
This volume of the Optics Survivability Support Final Report contains plots of all the data contained in the computerized Optical Glasses Database. All of these plots are accessible through the Database, but are included here as a convenient reference. The first three pages summarize the types of glass included with a description of the radiation source, test date, and the original data reference. This information is included in the database as a macro button labeled 'LLNL DATABASE'. Following this summary is an Abbe chart showing which glasses are included and where they lie as a function of nu(sub d) and n(sub d). This chart is also callable through the database as a macro button labeled 'ABBEC'.
Wright, T.L.; Takahashi, T.J.
1998-01-01
The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available for download from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.
Nuclear Science References Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B., E-mail: pritychenko@bnl.gov; Běták, E.; Singh, B.
2014-06-15
The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).
Development of image and information management system for Korean standard brain
NASA Astrophysics Data System (ADS)
Chung, Soon Cheol; Choi, Do Young; Tack, Gye Rae; Sohn, Jin Hun
2004-04-01
The purpose of this study is to establish a reference for image acquisition toward a standard brain for a diverse Korean population, and to develop a database management system that stores and manages acquired brain images and subjects' personal information. The 3D MP-RAGE (Magnetization Prepared Rapid Gradient Echo) technique, which has an excellent signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) and reduces image acquisition time, was selected for anatomical image acquisition, and parameter values were obtained for optimal image acquisition. Using these standards, image data from 121 young adults (in their early twenties) were obtained and stored in the system. The system was designed to obtain, save, and manage not only anatomical image data but also subjects' basic demographic factors, medical history, handedness inventory, state-trait anxiety inventory, A-type personality inventory, self-assessment depression inventory, mini-mental state examination, intelligence test, and personality test results collected via a survey questionnaire. Additionally, the system was designed with functions for saving, inserting, deleting, searching, and printing subjects' image data and personal information, with access to them provided through automatic connection setup via ODBC. This newly developed system may contribute substantially to the completion of a standard brain for a diverse Korean population, since it can save and manage their image data and personal information.
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness's verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target, but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm that can predict human descriptive values. Alternatives to human-derived coding schemes exist, based on statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method that does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
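The genetic search described above can be sketched as follows: each candidate face is a feature vector, the witness's similarity ratings act as the fitness function, and top-rated candidates breed the next generation. All parameters here are illustrative assumptions.

    import random

    def evolve(population, ratings, mutation=0.1):
        """population: list of feature vectors; ratings: witness scores."""
        ranked = [p for _, p in sorted(zip(ratings, population),
                                       key=lambda t: -t[0])]
        parents = ranked[:max(2, len(ranked) // 2)]    # keep top-rated half
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))          # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g + random.gauss(0, mutation) for g in child])
        return children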
NASA Astrophysics Data System (ADS)
Hagita, Norihiro; Sawaki, Minako
1995-03-01
Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. This method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments were conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates: over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse-contrast characters.
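A sketch of a complementary-similarity-style computation on binary patterns is given below. The four counts a, b, c, d cover the agreement/disagreement cases between an input pattern and a dictionary pattern; the exact normalization in the paper may differ, so treat this particular form as an assumption.

    import numpy as np

    def complementary_similarity(f, t):
        """f: binary input pattern, t: binary dictionary pattern."""
        f, t = f.astype(bool).ravel(), t.astype(bool).ravel()
        a = np.sum(f & t)       # both foreground
        b = np.sum(f & ~t)      # input only
        c = np.sum(~f & t)      # dictionary only
        d = np.sum(~f & ~t)     # both background
        return (a * d - b * c) / np.sqrt(float((a + c) * (b + d)) + 1e-12)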
APPLICATION OF A "VITURAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES
An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...
USDA National Nutrient Database for Standard Reference, release 28
USDA-ARS?s Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference, Release 28 contains data for nearly 8,800 food items for up to 150 food components. SR28 replaces the previous release, SR27, originally issued in August 2014. Data in SR28 supersede values in the printed handbooks and previous electronic...
Conversion of a traditional image archive into an image resource on compact disc.
Andrew, S M; Benbow, E W
1997-01-01
The conversion of a traditional archive of pathology images, organised on 35 mm slides, into a database of images stored on compact disc (CD-ROM) is described; textual descriptions were added to each image record. Students on a didactic pathology course found this resource useful as an aid to revision, despite relative computer illiteracy, and it is anticipated that students on a new problem-based learning course, which incorporates experience with information technology, will benefit even more readily when they use the database as an educational resource. A text and image database on CD-ROM can be updated repeatedly, and the content manipulated to reflect the content and style of the courses it supports. PMID:9306931
Thermodynamics of Enzyme-Catalyzed Reactions Database
National Institute of Standards and Technology Data Gateway
SRD 74 Thermodynamics of Enzyme-Catalyzed Reactions Database (Web, free access) The Thermodynamics of Enzyme-Catalyzed Reactions Database contains thermodynamic data on enzyme-catalyzed reactions that have been recently published in the Journal of Physical and Chemical Reference Data (JPCRD). For each reaction the following information is provided: the reference for the data, the reaction studied, the name of the enzyme used and its Enzyme Commission number, the method of measurement, the data and an evaluation thereof.
Lowe, H. J.
1993-01-01
This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer-to-peer file sharing protocols. Image Engine supports both free text and controlled vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596
A Relational Database System for Student Use.
ERIC Educational Resources Information Center
Fertuck, Len
1982-01-01
Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)
Fiber pixelated image database
NASA Astrophysics Data System (ADS)
Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke
2016-08-01
Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This prevents the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited the universal use and adaptability of their methods. A database of pixelated images is currently required to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help to researchers working on comb structure removal algorithms.
SU-E-J-112: Intensity-Based Pulmonary Image Registration: An Evaluation Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Meyer, J; Sandison, G
2015-06-15
Purpose: Accurate alignment of thoracic CT images is essential for dose tracking and to safely implement adaptive radiotherapy in lung cancers. At the same time it is challenging given the highly elastic nature of lung tissue deformations. The objective of this study was to assess the performance of three state-of-the-art intensity-based algorithms in terms of their ability to register thoracic CT images subject to affine, barrel, and sinusoid transformations. Methods: The intensity similarity measures of the evaluated algorithms comprised sum-of-squared difference (SSD), local mutual information (LMI), and residual complexity (RC). Five thoracic CT scans obtained from the EMPIRE10 challenge database were included and served as reference images. Each CT dataset was distorted by realistic affine, barrel, and sinusoid transformations. Registration performance of the three algorithms was evaluated for each distortion type in terms of the intensity root mean square error (IRMSE) between the reference and registered images in the lung regions. Results: For affine distortions, the three algorithms differed significantly in registration of thoracic images, both visually and nominally, in terms of IRMSE, with a mean of 0.011 for SSD, 0.039 for RC, and 0.026 for LMI (p<0.01; Kruskal-Wallis test). For barrel distortion, the three algorithms showed nominally no significant difference in terms of IRMSE, with a mean of 0.026 for SSD, 0.086 for RC, and 0.054 for LMI (p=0.16). A significant difference was seen for sinusoid-distorted thoracic CT data, with mean lung IRMSE of 0.039 for SSD, 0.092 for RC, and 0.035 for LMI (p=0.02). Conclusion: Pulmonary deformations may vary to a large extent in a daily clinical setting due to factors ranging from anatomical variations to respiratory motion to image quality. It can be appreciated from the results of the present study that the suitability of a particular algorithm for pulmonary image registration is deformation-dependent.
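The IRMSE figure of merit used above is straightforward to compute; a minimal sketch follows, with the restriction to a lung mask assumed from the abstract.

    import numpy as np

    def irmse(reference, registered, lung_mask):
        """Intensity RMSE between two aligned CT volumes inside the lungs."""
        diff = reference[lung_mask] - registered[lung_mask]
        return np.sqrt(np.mean(diff ** 2))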
QBIC project: querying images by content, using color, texture, and shape
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Barber, Ron; Equitz, Will; Flickner, Myron D.; Glasman, Eduardo H.; Petkovic, Dragutin; Yanker, Peter; Faloutsos, Christos; Taubin, Gabriel
1993-04-01
In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medical (`Give me other images that contain a tumor with a texture like this one'), photo-journalism (`Give me images that have blue at the top and red at the bottom'), and many others in art, fashion, cataloging, retailing, and industry. Key issues include derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user-drawn image, the user interfaces, query refinement and navigation, high-dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip art images. In this paper we present the main algorithms for color, texture, shape and sketch queries that we use, show example query results, and discuss future directions.
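In the spirit of the "blue at the top and red at the bottom" example, a toy content query can compare coarse per-region color histograms, as sketched below; the region grid, bin counts, and histogram-intersection score are arbitrary choices, not QBIC's actual attributes.

    import numpy as np

    def region_histograms(image, grid=(2, 2), bins=8):
        """Concatenate normalized RGB histograms of grid regions."""
        h, w, _ = image.shape
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                patch = image[i*h//grid[0]:(i+1)*h//grid[0],
                              j*w//grid[1]:(j+1)*w//grid[1]]
                hist, _ = np.histogramdd(patch.reshape(-1, 3).astype(float),
                                         bins=bins, range=[(0, 256)] * 3)
                feats.append(hist.ravel() / hist.sum())
        return np.concatenate(feats)

    def similarity(f1, f2, n_regions=4):
        return np.minimum(f1, f2).sum() / n_regions   # 1.0 = identical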
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Y; Medin, P; Yordy, J
2014-06-01
Purpose: To present a strategy to integrate the imaging database of a VERO unit with a treatment management system (TMS) to improve clinical workflow and consolidate image data to facilitate clinical quality control and documentation. Methods: A VERO unit is equipped with both kV and MV imaging capabilities for IGRT treatments. It has its own imaging database behind a firewall. It has been a challenge to transfer images on this unit to a TMS in a radiation therapy clinic so that registered images can be reviewed remotely with an approval or rejection record. In this study, a software system, iPump-VERO, was developed to connect VERO and the TMS in our clinic. The patient database folder on the VERO unit was mapped to a read-only folder on a file server outside the VERO firewall. The application runs on a regular computer with read access to the patient database folder. It finds the latest registered images and fuses them in one of six predefined patterns before sending them via a DICOM connection to the TMS. The residual image registration errors are overlaid on the fused image to facilitate image review. Results: The fused images of either registered kV planar images or CBCT images are fully DICOM compatible. A sentinel module is built to sense newly registered images with negligible computing resources from the VERO ExacTrac imaging computer. It takes a few seconds to fuse registered images and send them to the TMS. The whole process is automated without any human intervention. Conclusion: Transferring images over a DICOM connection is the easiest way to consolidate images from various sources in a TMS. Technically, the attending physician does not have to go to the VERO treatment console to review image registration prior to delivery. It is a useful tool for a busy clinic with a VERO unit.
Dalton, J.B.; Bove, D.J.; Mladinich, C.S.
2005-01-01
Visible-wavelength and near-infrared image cubes of the Animas River watershed in southwestern Colorado have been acquired by the Jet Propulsion Laboratory's Airborne Visible and InfraRed Imaging Spectrometer (AVIRIS) instrument and processed using the U.S. Geological Survey Tetracorder v3.6a2 implementation. The Tetracorder expert system utilizes a spectral reference library containing more than 400 laboratory and field spectra of end-member minerals, mineral mixtures, vegetation, manmade materials, atmospheric gases, and additional substances to generate maps of mineralogy, vegetation, snow, and other material distributions. Major iron-bearing, clay, mica, carbonate, sulfate, and other minerals were identified, among which are several minerals associated with acid rock drainage, including pyrite, jarosite, alunite, and goethite. Distributions of minerals such as calcite and chlorite indicate a relationship between acid-neutralizing assemblages and stream geochemistry within the watershed. Images denoting material distributions throughout the watershed have been orthorectified against digital terrain models to produce georeferenced image files suitable for inclusion in Geographic Information System databases. Results of this study are of use to land managers, stakeholders, and researchers interested in understanding a number of characteristics of the Animas River watershed.
[Expert systems and automatic diagnostic systems in histopathology--a review].
Tamai, S
1999-02-01
In this decade, pathological information systems have gradually become established in many hospitals in Japan. Pathological reports and images are now digitized, managed in databases, and referred to by clinicians at peripheral workstations. Tele-pathology is also developing, and its users are increasing. However, on many occasions, problem solving in diagnostic pathology depends entirely on the solo pathologist. Considering the need for timely and efficient support of the solo pathologist, I reviewed the papers on knowledge-based interactive expert systems. The interpretation of histopathological images depends on the pathologist, and these expert systems have been evaluated as "educational". In view of the success in cytological screening, the development of an "image-analysis-based" automatic classifier of histopathological images has been an ongoing challenge. Our three years' experience in developing a pathological image classifier using artificial neural network technology is briefly presented. This classifier provides a "fitting rate" for individual diagnostic patterns of breast tumors, such as the "fibroadenoma pattern". A computer-based diagnosis-assisting system should provide pathologists, especially solo pathologists, a useful tool for quality assurance and improvement of pathological diagnosis.
Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction
NASA Astrophysics Data System (ADS)
Rizal Isnanto, R.
2015-06-01
The human iris has a highly distinctive pattern that can be used for biometric recognition. To identify texture in an image, texture analysis methods can be used. One such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was performed, and a comparative analysis was conducted from which some conclusions were drawn. Several steps were required. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The features obtained are energy values. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition-rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, further tests were conducted using energy compaction for all five wavelet types above. As a result, the highest recognition rate is achieved using Haar; moreover, when cutting coefficients with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate, i.e., the fraction of significant coefficients retained, is lower for Haar than for the other wavelet types (db5, coif3, sym4, and bior2.4).
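A sketch of the energy-based wavelet features follows, using PyWavelets names for the five families mentioned ('haar', 'db5', 'coif3', 'sym4', 'bior2.4'); the decomposition level and the exact normalized-distance formula are assumptions.

    import numpy as np
    import pywt

    def energy_features(iris, wavelet="haar", level=3):
        """Energy of each subband of a 2-D wavelet decomposition."""
        coeffs = pywt.wavedec2(iris.astype(float), wavelet, level=level)
        feats = [np.sum(coeffs[0] ** 2)]              # approximation energy
        for cH, cV, cD in coeffs[1:]:                 # detail subbands
            feats += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
        return np.asarray(feats)

    def normalized_euclidean(f1, f2):
        # One common normalization; the paper's exact form may differ.
        return np.linalg.norm(f1 - f2) / (np.linalg.norm(f1) + 1e-12)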
NOAA Data Rescue of Key Solar Databases and Digitization of Historical Solar Images
NASA Astrophysics Data System (ADS)
Coffey, H. E.
2006-08-01
Over a number of years, the staff at NOAA National Geophysical Data Center (NGDC) has worked to rescue key solar databases by converting them to digital format and making them available via the World Wide Web. NOAA has had several data rescue programs where staff compete for funds to rescue important and critical historical data that are languishing in archives and at risk of being lost due to deteriorating condition, loss of any metadata or descriptive text that describe the databases, lack of interest or funding in maintaining databases, etc. The Solar-Terrestrial Physics Division at NGDC was able to obtain funds to key in some critical historical tabular databases. Recently the NOAA Climate Database Modernization Program (CDMP) funded a project to digitize historical solar images, producing a large online database of historical daily full disk solar images. The images include the wavelengths Calcium K, Hydrogen Alpha, and white light photos, as well as sunspot drawings and the comprehensive drawings of a multitude of solar phenomena on one daily map (Fraunhofer maps and Wendelstein drawings). Included in the digitization are high resolution solar H-alpha images taken at the Boulder Solar Observatory 1967-1984. The scanned daily images document many phases of solar activity, from decadal variation to rotational variation to daily changes. Smaller versions are available online. Larger versions are available by request. See http://www.ngdc.noaa.gov/stp/SOLAR/ftpsolarimages.html. The tabular listings and solar imagery will be discussed.
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
Code of Federal Regulations, 2012 CFR
2012-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2011 CFR
2011-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2014 CFR
2014-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2013 CFR
2013-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
An Online Resource for Flight Test Safety Planning
NASA Technical Reports Server (NTRS)
Lewis, Greg
2007-01-01
A viewgraph presentation describing an online database for flight test safety techniques is shown. The topics include: 1) Goal; 2) Test Hazard Analyses; 3) Online Database Background; 4) Data Gathering; 5) NTPS Role; 6) Organizations; 7) Hazard Titles; 8) FAR Paragraphs; 9) Maneuver Name; 10) Identified Hazard; 11) Matured Hazard Titles; 12) Loss of Control Causes; 13) Mitigations; 14) Database Now Open to the Public; 15) FAR Reference Search; 16) Record Field Search; 17) Keyword Search; and 18) Results of FAR Reference Search.
Ryan, Michael C.; Ostmo, Susan; Jonas, Karyn; Berrocal, Audina; Drenser, Kimberly; Horowitz, Jason; Lee, Thomas C.; Simmons, Charles; Martinez-Castellanos, Maria-Ana; Chan, R.V. Paul; Chiang, Michael F.
2014-01-01
Information systems managing image-based data for telemedicine or clinical research applications require a reference standard representing the correct diagnosis. Accurate reference standards are difficult to establish because of imperfect agreement among physicians and discrepancies between clinical and image-based diagnosis. This study describes the development and evaluation of reference standards for image-based diagnosis that combine the diagnostic impressions of multiple image readers with the actual clinical diagnoses. We show that agreement between image reading and clinical examination was imperfect (689 [32%] discrepancies in 2148 image readings), as was inter-reader agreement (kappa 0.490-0.652). This was improved by establishing an image-based reference standard defined as the majority diagnosis given by three readers (13% discrepancies with image readers). It was further improved by establishing an overall reference standard that incorporated the clinical diagnosis (10% discrepancies with image readers). These principles of establishing reference standards may be applied to improve the robustness of real-world systems supporting image-based diagnosis. PMID:25954463
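The majority-of-three reference standard and the pairwise kappa statistics reported above are simple to reproduce; the sketch below uses made-up reader labels purely for illustration.

    from collections import Counter
    from sklearn.metrics import cohen_kappa_score

    reader1 = ["disease", "normal", "disease", "normal"]
    reader2 = ["disease", "normal", "normal", "normal"]
    reader3 = ["disease", "disease", "disease", "normal"]

    majority = [Counter(votes).most_common(1)[0][0]
                for votes in zip(reader1, reader2, reader3)]
    print(majority)                                   # image-based standard
    print(cohen_kappa_score(reader1, reader2))        # inter-reader kappa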
EPA's Toxicity Reference Databases (ToxRefDB) was developed by the National Center for Computational Toxicology in partnership with EPA's Office of Pesticide Programs, to store data derived from in vivo animal toxicity studies [www.epa.gov/ncct/toxrefdb/]. The initial build of To...
USDA National Nutrient Database for Standard Reference, Release 25
USDA-ARS?s Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference, Release 25(SR25)contains data for over 8,100 food items for up to 146 food components. It replaces the previous release, SR24, issued in September 2011. Data in SR25 supersede values in the printed handbooks and previous electronic releas...
USDA National Nutrient Database for Standard Reference, Release 24
USDA-ARS?s Scientific Manuscript database
The USDA Nutrient Database for Standard Reference, Release 24 contains data for over 7,900 food items for up to 146 food components. It replaces the previous release, SR23, issued in September 2010. Data in SR24 supersede values in the printed Handbooks and previous electronic releases of the databa...
Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim
2004-04-01
Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early stage disease considerably improves chances of survival. Advances in multidetector-row computed tomography technology provide detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.
Design of a diagnostic encyclopaedia using AIDA.
van Ginneken, A M; Smeulders, A W; Jansen, W
1987-01-01
Diagnostic Encyclopaedia Workstation (DEW) is the name of a digital encyclopaedia constructed to contain reference knowledge with respect to the pathology of the ovary. Comparing DEW with the common sources of reference knowledge (i.e. books) leads to the following advantages of DEW: it contains more verbal knowledge, pictures and case histories, and it offers information adjusted to the needs of the user. Based on an analysis of the structure of this reference knowledge we have chosen AIDA to develop a relational database and we use a video-disc player to contain the pictorial part of the database. The system consists of a database input version and a read-only run version. The design of the database input version is discussed. Reference knowledge for ovary pathology requires 1-3 Mbytes of memory. At present 15% of this amount is available. The design of the run version is based on an analysis of which information must necessarily be specified to the system by the user to access a desired item of information. Finally, the use of AIDA in constructing DEW is evaluated.
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a "bag of feature maps" approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies (or departures therefrom) of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the ability of the features to improve automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it achieves quality-prediction power better than other leading models. PMID:28129417
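One classic natural-scene-statistics feature of the kind such models build on is the mean-subtracted contrast-normalized (MSCN) coefficient map, sketched below with a Gaussian local window. The window size and the tiny feature set are assumptions; the paper's full "bag of feature maps" is far richer.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn(image, sigma=7 / 6, c=1.0):
        """Mean-subtracted, contrast-normalized coefficients."""
        image = image.astype(float)
        mu = gaussian_filter(image, sigma)
        var = gaussian_filter(image ** 2, sigma) - mu ** 2
        return (image - mu) / (np.sqrt(np.maximum(var, 0)) + c)

    def simple_features(image):
        m = mscn(image)
        return np.array([m.mean(), m.var(), np.abs(m).mean()])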
Building structural similarity database for metric learning
NASA Astrophysics Data System (ADS)
Jin, Guoxin; Pappas, Thrasyvoulos N.
2015-03-01
We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions, of "highest," "high," and "good" similarity. We design two subjective tests for data collection: the first relies on ViSiProG to build a database of "identical" clusters, while the second builds a database of image pairs with "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests are generated during the MTC encoding process and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.
Location-Driven Image Retrieval for Images Collected by a Mobile Robot
NASA Astrophysics Data System (ADS)
Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji
Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system over such a robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that multiple relevant images exist owing to the variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named the location-driven approach, utilizing the correlation between visual features and real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on the support vector machine is extended for this aim.
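A minimal sketch of the feature-location pairing idea, assuming each image pair is summarized by a visual-feature distance and a physical-location distance (the paper's representation and active-learning loop are not reproduced):

    # Relevant/irrelevant classification of feature-location pairs with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical pairs: columns = [feature_distance, location_distance_m]
    X = np.array([[0.10, 0.5], [0.20, 0.3], [0.90, 4.0],
                  [0.80, 3.5], [0.15, 0.4], [0.70, 5.0]])
    y = np.array([1, 1, 0, 0, 1, 0])          # 1 = relevant, 0 = irrelevant

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict([[0.12, 0.45]]))        # classify a new pair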
Palmprint verification using Lagrangian decomposition and invariant interest points
NASA Astrophysics Data System (ADS)
Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.
2011-06-01
This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction from palmprint images; the region of interest (ROI), extracted from the wide palm texture at the preprocessing stage, is used for invariant point extraction. Finally, identity is established by finding a permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features. The permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.
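As a rough illustration of the SIFT-based front end (OpenCV's SIFT stands in for the paper's feature extraction, ratio-test matching stands in for the Lagrangian permutation-matrix step, and the file names are hypothetical):

    import cv2

    ref = cv2.imread("ref_palm_roi.png", cv2.IMREAD_GRAYSCALE)
    probe = cv2.imread("probe_palm_roi.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(probe, None)

    # Ratio-test matching as a crude stand-in for graph-based correspondence.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    print(len(good), "tentative correspondences")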
Matlashov, Andrei Nikolaevich; Urbaitis, Algis V.; Savukov, Igor Mykhaylovich; Espy, Michelle A.; Volegov, Petr Lvovich; Kraus, Jr., Robert Henry
2013-03-05
Method comprising obtaining an NMR measurement from a sample wherein an ultra-low field NMR system probes the sample and produces the NMR measurement and wherein a sampling temperature, prepolarizing field, and measurement field are known; detecting the NMR measurement by means of inductive coils; analyzing the NMR measurement to obtain at least one measurement feature wherein the measurement feature comprises T1, T2, T1.rho., or the frequency dependence thereof; and, searching for the at least one measurement feature within a database comprising NMR reference data for at least one material to determine if the sample comprises a material of interest.
GMOMETHODS: the European Union database of reference methods for GMO analysis.
Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre
2012-01-01
In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis, we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with a European Union legislative act. The web application provides search capabilities to retrieve primer and probe sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis.
Architecture for biomedical multimedia information delivery on the World Wide Web
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
1997-10-01
Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.
A complete database for the Einstein imaging proportional counter
NASA Technical Reports Server (NTRS)
Helfand, David J.
1991-01-01
A complete database for the Einstein Imaging Proportional Counter (IPC) has been assembled. The original data that make up the archive are described, as well as the structure of the database, the Op-Ed analysis system, the technical advances achieved relative to the analysis of IPC data, the data products produced, and some uses to which the database has been put by scientists outside Columbia University over the past year.
DDx: diagnostic assistance for the radiologist using PACS
NASA Astrophysics Data System (ADS)
Haynor, David R.
1993-09-01
A potentially valuable tool in medical imaging is the development, and integration with PACS, of systems which enhance the interpretive accuracy of the user -- his ability, given a set of findings (in the broad sense, including clinical information about the patient as well as characteristics of the lesion being analyzed), to assign the proper disease label, or diagnosis, to them. Such systems, which we call here interpretive tools (IT), contain a variety of types of information about diseases and their radiologic diagnosis. They can contain information about large numbers of diseases, including statistical information (incidence, characteristic anatomical locations, association with age and gender or with other diseases, probabilities of various findings given a disease), textual information (description of diseases, treatment, literature references, lists of other entities that might be confused with the disease of interest, additional diagnostic points that may not be represented, or even representable, within IT), and image-based information (typical and atypical examples for each entity and radiographic finding, examples of normal anatomy). These databases can be used both for teaching purposes and as a tool for improving interpretive accuracy [Swett, 1987]. This paper describes some of the requirements for these databases and then discusses early work on the implementation of DDx, an IT whose domain is neuroradiology.
Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J
2017-08-01
Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
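A schematic of the multi-observer fusion and the percent-volume-difference metric reported above (arrays are synthetic; the network itself is not shown):

    import numpy as np

    # Hypothetical probability maps from three independently trained "observers".
    probs = [np.random.rand(64, 64, 32) for _ in range(3)]
    ensemble = np.mean(probs, axis=0) > 0.5        # fused binary segmentation

    reference = np.random.rand(64, 64, 32) > 0.5   # stand-in reference standard
    voxel_ml = 0.002                               # hypothetical voxel volume (mL)

    tkv_auto = ensemble.sum() * voxel_ml
    tkv_ref = reference.sum() * voxel_ml
    print("percent volume difference: %.2f%%" % (100 * (tkv_auto - tkv_ref) / tkv_ref))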
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
Image-guided decision support system for pulmonary nodule classification in 3D thoracic CT images
NASA Astrophysics Data System (ADS)
Kawata, Yoshiki; Niki, Noboru; Ohmatsu, Hironobu; Kusumoto, Masahiro; Kakinuma, Ryutaro; Mori, Kiyoshi; Yamada, Kozo; Nishiyama, Hiroyuki; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki
2004-05-01
The purpose of this study is to develop an image-guided decision support system that assists decision-making in the clinical differential diagnosis of pulmonary nodules. This approach retrieves and displays nodules that exhibit morphological and internal profiles consistent with the nodule in question. It uses a three-dimensional (3-D) CT image database of pulmonary nodules for which the diagnosis is known. In order to build the system, the following issues must be solved: 1) categorizing the nodule database with respect to morphological and internal features, 2) quickly retrieving nodule images similar to an indeterminate nodule from a large database, and 3) presenting the malignancy likelihood computed using similar nodule images. The first issue, in particular, influences the design of the others. Successful categorization of nodule patterns might lead physicians to find important cues that characterize benign and malignant nodules. This paper focuses on an approach to categorizing the nodule database with respect to nodule shape and CT density patterns inside the nodule.
Tsybovskii, I S; Veremeichik, V M; Kotova, S A; Kritskaya, S V; Evmenenko, S A; Udina, I G
2017-02-01
For the Republic of Belarus, the development of a forensic reference database on the basis of 18 autosomal microsatellites (STR) is described, using a population dataset (N = 1040), a "familial" genotypic dataset (N = 2550) obtained from paternity-testing casework, and a dataset of genotypes from a criminal registration database (N = 8756). The population samples studied consist of 80% ethnic Belarusians and 20% individuals of other nationalities or of mixed origin (by questionnaire data). Genotypes of 12346 inhabitants of the Republic of Belarus from 118 regional samples, typed at 18 autosomal microsatellites, are included: 16 tetranucleotide STR (D2S1338, TPOX, D3S1358, CSF1PO, D5S818, D8S1179, D7S820, THO1, vWA, D13S317, D16S539, D18S51, D19S433, D21S11, F13B, and FGA) and two pentanucleotide STR (Penta D and Penta E). The samples studied are in Hardy–Weinberg equilibrium according to the distribution of genotypes at the 18 STR. Significant differences were not detected between discrete populations or between samples from the various historical ethnographic regions of the Republic of Belarus (Western and Eastern Polesie, Podneprovye, Ponemanye, Poozerye, and Center), which indicates the absence of prominent genetic differentiation. Statistically significant differences between the studied genotypic datasets were also not detected, which made it possible to combine the datasets and consider the total sample as a unified forensic reference database for the 18 "criminalistic" STR loci. Differences in the distribution of autosomal STR alleles between the reference database of the Republic of Belarus and those of Russians and Ukrainians were also not detected, consistent with the close genetic relationship of the three Eastern Slavic nations mediated by common origin and intense mutual migrations. Significant differences at individual STR loci were observed between the reference database of the Republic of Belarus and populations of Southern and Western Slavs. The necessity of using an original reference database to support forensic expertise practice in the Republic of Belarus was demonstrated.
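A minimal sketch of the Hardy-Weinberg check mentioned above, reduced to a two-allele locus for brevity (real STR loci such as D18S51 are multi-allelic, and the counts here are invented):

    import numpy as np
    from scipy.stats import chi2

    n_AA, n_AB, n_BB = 420, 460, 160            # hypothetical genotype counts
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)             # frequency of allele A
    expected = np.array([p**2, 2*p*(1-p), (1-p)**2]) * n
    observed = np.array([n_AA, n_AB, n_BB])
    stat = ((observed - expected)**2 / expected).sum()
    print("chi2 =", stat, "p =", chi2.sf(stat, df=1))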
Slomka, P J; Elliott, E; Driedger, A A
2000-01-01
In nuclear medicine practice, images often need to be reviewed and reports prepared from locations outside the department, usually in the form of hard copy. Although hard-copy images are simple and portable, they do not offer electronic data search and image manipulation capabilities. On the other hand, picture archiving and communication systems or dedicated workstations cannot be easily deployed at numerous locations. To solve this problem, we propose a Java-based remote viewing station (JaRViS) for the reading and reporting of nuclear medicine images using Internet browser technology. JaRViS interfaces to the clinical patient database of a nuclear medicine workstation. All JaRViS software resides on a nuclear medicine department server. The contents of the clinical database can be searched by a browser interface after providing a password. Compressed images with the Java applet and color lookup tables are downloaded on the client side. This paradigm does not require nuclear medicine software to reside on remote computers, which simplifies support and deployment of such a system. To enable versatile reporting of the images, color tables and thresholds can be interactively manipulated and images can be displayed in a variety of layouts. Image filtering, frame grouping (adding frames), and movie display are available. Tomographic mode displays are supported, including gated SPECT. The time to display 14 lung perfusion images in a 128 x 128 matrix together with the Java applet and color lookup tables over a V.90 modem is <1 min. SPECT and PET slice reorientation is interactive (<1 s). JaRViS can run on a Windows 95/98/NT or a Macintosh platform with Netscape Communicator or Microsoft Internet Explorer. The performance of Java code for bilinear interpolation, cine display, and filtering approaches that of a standard imaging workstation. It is feasible to set up a remote nuclear medicine viewing station using Java and an Internet or intranet browser. Images can be made easily and cost-effectively available to referring physicians and ambulatory clinics within and outside of the hospital, providing a convenient alternative to film media. We also find this system useful in home reporting of emergency procedures such as lung ventilation-perfusion scans or dynamic studies.
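Bilinear interpolation, whose Java performance the abstract highlights, is simple to express; a pure-NumPy sketch of the zoom operation:

    import numpy as np

    def bilinear_zoom(img, factor):
        h, w = img.shape
        ys = np.linspace(0, h - 1, int(h * factor))
        xs = np.linspace(0, w - 1, int(w * factor))
        y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
        y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
        wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
        top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
        bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
        return top * (1 - wy) + bot * wy

    frame = np.arange(128 * 128, dtype=float).reshape(128, 128)
    print(bilinear_zoom(frame, 2.0).shape)      # (256, 256)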
Distributed data collection for a database of radiological image interpretations
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.
1997-01-01
The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1990-01-01
A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
Digital Data Registration and Differencing Compression System
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1996-01-01
A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1992-01-01
A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three dimensional model, which three dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
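The register-difference-compress pipeline common to the three patent texts above can be sketched in a few lines (zlib stands in for the "conventional compression algorithms", an exhaustive small-shift search stands in for the model-based correlation, and the images are synthetic):

    import zlib
    import numpy as np

    reference = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    subject = np.roll(reference, (3, -2), axis=(0, 1))    # shifted copy

    # Translational registration by brute-force search over small shifts.
    best = min(((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
               key=lambda s: np.abs(
                   np.roll(subject, (-s[0], -s[1]), axis=(0, 1)).astype(int)
                   - reference.astype(int)).sum())
    aligned = np.roll(subject, (-best[0], -best[1]), axis=(0, 1))

    diff = (aligned.astype(np.int16) - reference.astype(np.int16)).tobytes()
    print("compressed:", len(zlib.compress(diff)), "bytes; raw:", len(diff))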
A set of 4D pediatric XCAT reference phantoms for multimodality research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Hannah, E-mail: Hannah.norris@duke.edu; Zhang, Yakun; Bond, Jason
Purpose: The authors previously developed an adult population of 4D extended cardiac-torso (XCAT) phantoms for multimodality imaging research. In this work, the authors develop a reference set of 4D pediatric XCAT phantoms consisting of male and female anatomies at ages of newborn, 1, 5, 10, and 15 years. These models will serve as the foundation from which the authors will create a vast population of pediatric phantoms for optimizing pediatric CT imaging protocols. Methods: Each phantom was based on a unique set of CT data from a normal patient obtained from the Duke University database. The datasets were selected to best match the reference values for height and weight for the different ages and genders according to ICRP Publication 89. The major organs and structures were segmented from the CT data and used to create an initial pediatric model defined using nonuniform rational B-spline surfaces. The CT data covered the entire torso and part of the head. To complete the body, the authors manually added on the top of the head and the arms and legs using scaled versions of the XCAT adult models or additional models created from cadaver data. A multichannel large deformation diffeomorphic metric mapping algorithm was then used to calculate the transform from a template XCAT phantom (male or female 50th percentile adult) to the target pediatric model. The transform was applied to the template XCAT to fill in any unsegmented structures within the target phantom and to implement the 4D cardiac and respiratory models in the new anatomy. The masses of the organs in each phantom were matched to the reference values given in ICRP Publication 89. The new reference models were checked for anatomical accuracy via visual inspection. Results: The authors created a set of ten pediatric reference phantoms that have the same level of detail and functionality as the original XCAT phantom adults. Each consists of thousands of anatomical structures and includes parameterized models for the cardiac and respiratory motions. Based on patient data, the phantoms capture the anatomic variations of childhood, such as the development of bone in the skull, pelvis, and long bones, and the growth of the vertebrae and organs. The phantoms can be combined with existing simulation packages to generate realistic pediatric imaging data from different modalities. Conclusions: The development of patient-derived pediatric computational phantoms is useful in providing variable anatomies for simulation. Future work will expand this ten-phantom base to a host of pediatric phantoms representative of the public at large. This can provide a means to evaluate and improve pediatric imaging devices and to optimize CT protocols in terms of image quality and radiation dose.
Diagnostic reference levels in low- and middle-income countries: early "ALARAm" bells?
Meyer, Steven; Groenewald, Willem A; Pitcher, Richard D
2017-04-01
Background In 1996 the International Commission on Radiological Protection (ICRP) introduced diagnostic reference levels (DRLs) as a quality assurance tool for radiation dose optimization. While many countries have published DRLs, available data are largely from high-income countries. There is arguably a greater need for DRLs in low- and middle-income countries (LMICs), where imaging equipment may be older and trained imaging technicians are scarce. To date, there has been no critical analysis of the published work on DRLs in LMICs. Such work is important to evaluate data deficiencies and stimulate future quality assurance initiatives. Purpose To review the published work on DRLs in LMICs and to critically analyze the comprehensiveness of available data. Material and Methods Medline, Scopus, and Web of Science database searches were conducted for English-language articles published between 1996 and 2015 documenting DRLs for diagnostic imaging in LMICs. Retrieved articles were analyzed and classified by geographical region, country of origin, contributing author, year of publication, imaging modality, body part, and patient age. Results Fifty-three articles reported DRLs for 28 of 135 LMICs (21%), reflecting data from 26/104 (25%) middle-income countries and 2/31 (6%) low-income countries. General radiography (n = 26, 49%) and computerized tomography (n = 17, 32%) data were most commonly reported. Pediatric DRLs (n = 14, 26%) constituted approximately one-quarter of the published work. Conclusion Published DRL data are deficient in the majority of LMICs, with the paucity most striking in low-income countries. DRL initiatives are required in LMICs to enhance dose optimization.
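DRLs are conventionally derived as the 75th percentile of a surveyed dose distribution (a widely used convention that the abstract presumes rather than restates); as a toy illustration with invented survey values:

    import numpy as np

    ctdi_vol = np.array([8.2, 11.5, 9.7, 14.3, 10.1, 12.8, 9.0, 13.6])  # mGy
    print("proposed DRL: %.1f mGy" % np.percentile(ctdi_vol, 75))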
BODY IMAGE IN CHILDHOOD: AN INTEGRATIVE LITERATURE REVIEW
Neves, Clara Mockdece; Cipriani, Flávia Marcelle; Meireles, Juliana Fernandes Filgueiras; Morgado, Fabiane Frota da Rocha; Ferreira, Maria Elisa Caputo
2017-01-01
ABSTRACT Objective: To analyse the scientific literature regarding the evaluation of body image in children through an integrative literature review. Data source: An intersection of the keywords "body image" AND "child" was conducted in the Scopus, Medline and Virtual Health Library (BVS - Biblioteca Virtual de Saúde) databases. The electronic search was based on studies published from January 2013 to January 2016, in order to capture the most current investigations on the subject. Exclusion criteria were: articles in duplicate; no available summaries; not empirical; not assessing any component of body image; samples that did not consider the target age of this research (0 to 12 years old) and/or considered clinical populations; and articles not fully available. Data synthesis: 7,681 references were identified, and, after the exclusion criteria were implemented, 33 studies were analysed. Results showed that the perceptual and attitudinal dimensions focusing on body dissatisfaction were explored, mainly evaluated by silhouette scales. Intervention programs were developed internationally to prevent negative body image in children. Conclusions: The studies included in this review evaluated specific aspects of body image in children, especially body perception and body dissatisfaction. The creation of specific tools for children to evaluate body image is recommended to promote the psychosocial well-being of individuals throughout human development. PMID:28977287
NASA Astrophysics Data System (ADS)
Uchiyama, Yoshikazu; Asano, Tatsunori; Hara, Takeshi; Fujita, Hiroshi; Kinosada, Yasutomi; Asano, Takahiko; Kato, Hiroki; Kanematsu, Masayuki; Hoshi, Hiroaki; Iwama, Toru
2009-02-01
The detection of cerebrovascular diseases such as unruptured aneurysm, stenosis, and occlusion is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed in order to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, which is an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray-level transformation to calibrate voxel values. To adjust for variations in the positioning of patients, registration was subsequently employed to maximize the overlap of the vessel regions in the target image and reference image. The vessel regions were then segmented from the background using gray-level thresholding and region growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. Acceptable segmentation of the cerebral arteries was attained in 97.1% (829/854) of the MRA studies. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.
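A minimal 2-D region-growing sketch for the vessel-segmentation step described above (seed and tolerance are hypothetical; the gray-level calibration, registration, and rule-based false-positive reduction are omitted):

    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol):
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        ref_val = float(img[seed])
        q = deque([seed])
        while q:
            y, x = q.popleft()
            if (0 <= y < h and 0 <= x < w and not mask[y, x]
                    and abs(float(img[y, x]) - ref_val) <= tol):
                mask[y, x] = True
                q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
        return mask

    slice_img = np.random.rand(64, 64)           # stand-in MRA slice
    print(region_grow(slice_img, (32, 32), 0.2).sum(), "pixels grown")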
Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter
2017-10-25
For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.
Cardiac phase detection in intravascular ultrasound images
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Lemos, Pedro Alves; Yoneyama, Takashi; Furuie, Sergio Shiguemi
2008-03-01
Image gating concerns imaging modalities that involve quasi-periodically moving organs; during intravascular ultrasound (IVUS) examination, there is cardiac movement interference. In this paper, we aim to obtain gated IVUS images based on the images themselves. This would allow the reconstruction of 3D coronaries with temporal accuracy for any cardiac phase, which is an advantage over ECG-gated acquisition, which yields a single one. It is also important for retrospective studies, as existing IVUS databases contain no additional reference signals (ECG). From the images, we calculated signals based on average intensity (AI) and, from consecutive frames, average intensity difference (AID), cross-correlation coefficient (CC), and mutual information (MI). The process includes a wavelet-based filtering step and ascending zero-crossing detection in order to obtain the phase information. First, we tested 90 simulated sequences with 1025 frames each. Our method was able to achieve more than 95.0% true positives and less than 2.3% false positives, for all signals. Afterwards, we tested a real examination, with 897 frames and ECG as the gold standard. We achieved 97.4% true positives (CC and MI) and 2.5% false positives. In future work, the methodology should be tested on a wider range of IVUS examinations.
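A sketch of one image-based gating signal, assuming synthetic frames: the average intensity difference (AID) between consecutive frames is smoothed (a moving average standing in for the wavelet filter), and ascending zero crossings of the mean-subtracted signal mark candidate phases:

    import numpy as np

    frames = np.random.rand(200, 64, 64)               # stand-in IVUS pullback
    aid = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

    kernel = np.ones(5) / 5.0
    smoothed = np.convolve(aid - aid.mean(), kernel, mode="same")

    rising = np.where((smoothed[:-1] < 0) & (smoothed[1:] >= 0))[0] + 1
    print("candidate phase frames:", rising[:10])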
The role of body image and self-perception in anorexia nervosa: the neuroimaging perspective.
Esposito, Roberto; Cieri, Filippo; di Giannantonio, Massimo; Tartaro, Armando
2018-03-01
Anorexia nervosa is a severe psychiatric illness characterized by intense fear of gaining weight, relentless pursuit of thinness, deep concerns about food and a pervasive disturbance of body image. Functional magnetic resonance imaging tries to shed light on the neurobiological underpinnings of anorexia nervosa. This review aims to evaluate the empirical neuroimaging literature about self-perception in anorexia nervosa. This narrative review summarizes a number of task-based and resting-state functional magnetic resonance imaging studies in anorexia nervosa about body image and self-perception. The articles listed in references were searched using electronic databases (PubMed and Google Scholar) from 1990 to February 2016 using specific key words. All studies were reviewed with regard to their quality and eligibility for the review. Differences in brain activity were observed using body image perception and body size estimation tasks showing significant modifications in activity of specific brain areas (extrastriate body area, fusiform body area, inferior parietal lobule). Recent studies highlighted the role of emotions and self-perception in anorexia nervosa and their neural substrate involving resting-state networks and particularly frontal and posterior midline cortical structures within default mode network and insula. These findings open new horizons to understand the neural substrate of anorexia nervosa. © 2016 The British Psychological Society.
Using All-Sky Imaging to Improve Telescope Scheduling (Abstract)
NASA Astrophysics Data System (ADS)
Cole, G. M.
2017-12-01
(Abstract only) Automated scheduling makes it possible for a small telescope to observe a large number of targets in a single night. But when used in areas which have less-than-perfect sky conditions, such automation can lead to large numbers of observations of clouds and haze. This paper describes the development of a "sky-aware" telescope automation system that integrates the data flow from an SBIG AllSky340c camera with an enhanced dispatch scheduler to make optimum use of the available observing conditions for two highly instrumented backyard telescopes. Using the minute-by-minute time series image stream and a self-maintained reference database, the software maintains a file of sky brightness, transparency, stability, and forecasted visibility at several hundred grid positions. The scheduling software uses this information in real time to exclude targets obscured by clouds and select the best observing task, taking into account the requirements and limits of each instrument.
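A toy version of the "sky-aware" dispatch idea: drop targets whose sky-grid cell currently has poor transparency, then rank the remainder (grid, targets, and thresholds are all invented):

    sky_grid = {(10, 20): {"transparency": 0.9},
                (11, 20): {"transparency": 0.2}}

    targets = [{"name": "V1", "cell": (10, 20), "priority": 5},
               {"name": "V2", "cell": (11, 20), "priority": 9}]  # clouded out

    observable = [t for t in targets
                  if sky_grid[t["cell"]]["transparency"] > 0.5]
    best = max(observable, key=lambda t: t["priority"])
    print("next task:", best["name"])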
Intelligent distributed medical image management
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.
1995-05-01
The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute- based and content-based retrievals.
DIGITAL CARTOGRAPHY OF THE PLANETS: NEW METHODS, ITS STATUS, AND ITS FUTURE.
Batson, R.M.
1987-01-01
A system has been developed that establishes a standardized cartographic database for each of the 19 planets and major satellites that have been explored to date. Compilation of the databases involves both traditional and newly developed digital image processing and mosaicking techniques, including radiometric and geometric corrections of the images. Each database, or digital image model (DIM), is a digital mosaic of spacecraft images that have been radiometrically and geometrically corrected and photometrically modeled. During compilation, ancillary data files such as radiometric calibrations and refined photometric values for all camera lens and filter combinations and refined camera-orientation matrices for all images used in the mapping are produced.
Body-wide anatomy recognition in PET/CT images
NASA Astrophysics Data System (ADS)
Wang, Huiqian; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Zhao, Liming; Torigian, Drew A.
2015-03-01
With the rapid growth of positron emission tomography/computed tomography (PET/CT)-based medical applications, body-wide anatomy recognition on whole-body PET/CT images becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem that has seldom been studied, due to the unclear anatomy reference frame and low spatial resolution of PET images as well as the low contrast and spatial resolution of the associated low-dose CT images. We previously developed an automatic anatomy recognition (AAR) system [15] whose applicability was demonstrated on diagnostic computed tomography (CT) and magnetic resonance (MR) images in different body regions on 35 objects. The aim of the present work is to investigate strategies for adapting the previous AAR system to low-dose CT and PET images toward automated body-wide disease quantification. Our adaptation of the previous AAR methodology to PET/CT images in this paper focuses on 16 objects in three body regions (thorax, abdomen, and pelvis) and consists of the following steps: collecting whole-body PET/CT images from existing patient image databases, delineating all objects in these images, modifying the previous hierarchical models built from diagnostic CT images to account for differences in appearance in low-dose CT and PET images, automatically locating objects in these images following the object hierarchy, and evaluating performance. Our preliminary evaluations indicate that the performance of the AAR approach on low-dose CT images achieves object localization accuracy within about 2 voxels, which is comparable to the accuracies achieved on diagnostic contrast-enhanced CT images. Object recognition on low-dose CT images from PET/CT examinations without requiring diagnostic contrast-enhanced CT seems feasible.
Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de
2012-10-01
Medical images were standardized in 1993 through the DICOM (Digital Imaging and Communications in Medicine) standard. Several tests use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, these software applications are not usually free and open-source, and this fact hinders their adjustment to the most diverse interests. To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 tests divided between two observers using ImageLab and another software sold with Philips Brilliance computed tomography scanners in the evaluation of coronary lesions and plaques around the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver, and intersoftware agreements, we used simple and kappa agreement statistics. The agreements observed between the software applications were generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
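Agreement of the kind reported above is typically quantified with Cohen's kappa; a minimal sketch with invented ratings (sklearn's implementation stands in for the paper's statistics):

    from sklearn.metrics import cohen_kappa_score

    imagelab = ["no lesion", "<50%", "no lesion", ">70%", "<50%"]
    philips  = ["no lesion", "<50%", "<50%",      ">70%", "<50%"]
    print("kappa =", cohen_kappa_score(imagelab, philips))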
Karnan, M; Thangavel, K
2007-07-01
The presence of microcalcifications in breast tissue is one of the most common signs considered by radiologists for an early diagnosis of breast cancer, which is one of the most common forms of cancer among women. In this paper, the Genetic Algorithm (GA) is proposed to automatically locate the breast border and nipple position and to discover suspicious regions on digital mammograms based on asymmetries between the left and right breast images. The basic idea of the asymmetry approach is that the left and right images are subtracted to extract the suspicious region. The proposed system consists of two steps. First, the mammogram images are enhanced using a median filter and normalized; the pectoral muscle region is excluded, and the breast border is extracted from the binary image for both the left and right images. GA is then applied to magnify the detected border, and a figure of merit is calculated to evaluate whether the detected border is exact. The nipple position is also identified using GA. Second, using the border points and nipple position as references, the mammogram images are aligned and subtracted to extract the suspicious region. The algorithms are tested on 114 abnormal digitized mammograms from the Mammographic Image Analysis Society database.
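The asymmetry step reduces to a mirror-and-subtract operation once the images are aligned; a sketch with synthetic images (landmark-based alignment is assumed done, and the threshold is hypothetical):

    import numpy as np

    left = np.random.rand(256, 256)
    right = np.random.rand(256, 256)

    diff = np.abs(left - np.fliplr(right))        # mirror, then subtract
    suspicious = diff > np.percentile(diff, 99)   # keep the strongest asymmetries
    print(suspicious.sum(), "candidate pixels")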
Diet History Questionnaire: Database Revision History
The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.
A Partnership for Public Health: USDA Branded Food Products Database
USDA-ARS?s Scientific Manuscript database
The importance of comprehensive food composition databases is more critical than ever in helping to address global food security. The USDA National Nutrient Database for Standard Reference is the “gold standard” for food composition databases. The presentation will include new developments in stren...
A mathematical model of neuro-fuzzy approximation in image classification
NASA Astrophysics Data System (ADS)
Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.
2016-06-01
Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from a large database. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves the similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all the above-mentioned areas it is necessary to retrieve grassland image data efficiently from a large database to perform an assigned task and to make a suitable decision. A CBIR system based on grassland image properties, aided by a feed-forward back-propagation neural network for effective image retrieval, is proposed in this paper. Fuzzy memberships play an important role in the input space of the proposed system, leading to a combined neuro-fuzzy approximation in image classification. The mathematical model presented in this work gives more clarity about the fuzzy-neuro approximation and the convergence of the image features in a grassland image.
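One way to read the neuro-fuzzy idea: raw feature values are mapped to fuzzy memberships, which then feed a small feed-forward network. A sketch under that assumption (membership breakpoints, features, and labels are all invented):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def triangular(x, a, b, c):
        # Triangular fuzzy membership of x over the support (a, b, c).
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    raw = np.random.rand(100, 1) * 255               # e.g., mean green intensity
    X = np.hstack([triangular(raw, 0, 60, 128),      # "low"
                   triangular(raw, 60, 128, 200),    # "medium"
                   triangular(raw, 128, 200, 255)])  # "high"
    y = (raw[:, 0] > 128).astype(int)                # toy class labels

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X, y)
    print("training accuracy:", clf.score(X, y))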
A database system to support image algorithm evaluation
NASA Technical Reports Server (NTRS)
Lien, Y. E.
1977-01-01
The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.
Analysis of the NMI01 marker for a population database of cannabis seeds.
Shirley, Nicholas; Allgeier, Lindsay; Lanier, Tommy; Coyle, Heather Miller
2013-01-01
We have analyzed the distribution of genotypes at a single hexanucleotide short tandem repeat (STR) locus in a Cannabis sativa seed database along with seed-packaging information. This STR locus is defined by the polymerase chain reaction amplification primers CS1F and CS1R and is referred to as NMI01 (for National Marijuana Initiative) in our study. The population database consists of seed seizures of two categories: seed samples from labeled and unlabeled packages regarding seed bank source. Of a population database of 93 processed seeds including 12 labeled Cannabis varieties, the observed genotypes generated from single seeds exhibited between one and three peaks (potentially six alleles if in homozygous state). The total number of observed genotypes was 54 making this marker highly specific and highly individualizing even among seeds of common lineage. Cluster analysis associated many but not all of the handwritten labeled seed varieties tested to date as well as the National Park seizure to our known reference database containing Mr. Nice Seedbank and Sensi Seeds commercially packaged reference samples. © 2012 American Academy of Forensic Sciences.
The EpiSLI Database: A Publicly Available Database on Speech and Language
ERIC Educational Resources Information Center
Tomblin, J. Bruce
2010-01-01
Purpose: This article describes a database that was created in the process of conducting a large-scale epidemiologic study of specific language impairment (SLI). As such, this database will be referred to as the EpiSLI database. Children with SLI have unexpected and unexplained difficulties learning and using spoken language. Although there is no…
USDA-ARS?s Scientific Manuscript database
USDA National Nutrient Database for Standard Reference Dataset for What We Eat In America, NHANES (Survey-SR) provides the nutrient data for assessing dietary intakes from the national survey What We Eat In America, National Health and Nutrition Examination Survey (WWEIA, NHANES). The current versi...
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
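The robustness claim rests on a simple observation: in a quotient of two images of the same surface, a multiplicative per-pixel albedo term cancels. A toy demonstration (the paper's exact feature definition may differ):

    import numpy as np

    albedo = np.random.rand(64, 64) + 0.1        # unknown per-pixel albedo
    shading_neutral = 0.8                        # hypothetical lighting factors
    shading_expression = 1.0

    neutral = albedo * shading_neutral
    expression = albedo * shading_expression

    ratio = expression / neutral                 # albedo cancels exactly
    print("ratio is constant:",
          np.allclose(ratio, shading_expression / shading_neutral))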
Recent Developments in Cultural Heritage Image Databases: Directions for User-Centered Design.
ERIC Educational Resources Information Center
Stephenson, Christie
1999-01-01
Examines the Museum Educational Site Licensing (MESL) Project--a cooperative project between seven cultural heritage repositories and seven universities--as well as other developments of cultural heritage image databases for academic use. Reviews recent literature on image indexing and retrieval, interface design, and tool development, urging a…
EROS Main Image File: A Picture Perfect Database for Landsat Imagery and Aerial Photography.
ERIC Educational Resources Information Center
Jack, Robert F.
1984-01-01
Describes Earth Resources Observation System online database, which provides access to computerized images of Earth obtained via satellite. Highlights include retrieval system and commands, types of images, search strategies, other online functions, and interpretation of accessions. Satellite information, sources and samples of accessions, and…
Gradishar, William; Johnson, KariAnne; Brown, Krystal; Mundt, Erin; Manley, Susan
2017-07-01
There is a growing move to consult public databases following receipt of a genetic test result from a clinical laboratory; however, the well-documented limitations of these databases call into question how often clinicians will encounter discordant variant classifications that may introduce uncertainty into patient management. Here, we evaluate discordance in BRCA1 and BRCA2 variant classifications between a single commercial testing laboratory and a public database commonly consulted in clinical practice. BRCA1 and BRCA2 variant classifications were obtained from ClinVar and compared with the classifications from a reference laboratory. Full concordance and discordance were determined for variants whose ClinVar entries were of the same pathogenicity (pathogenic, benign, or uncertain). Variants with conflicting ClinVar classifications were considered partially concordant if ≥1 of the listed classifications agreed with the reference laboratory classification. Four thousand two hundred and fifty unique BRCA1 and BRCA2 variants were available for analysis. Overall, 73.2% of classifications were fully concordant and 12.3% were partially concordant. The remaining 14.5% of variants had discordant classifications, most of which had a definitive classification (pathogenic or benign) from the reference laboratory compared with an uncertain classification in ClinVar (14.0%). Here, we show that discrepant classifications between a public database and a single reference laboratory potentially account for 26.7% of variants in BRCA1 and BRCA2. The time and expertise required of clinicians to research these discordant classifications call into question the practicality of checking all test results against a database and suggest that discordant classifications should be interpreted with these limitations in mind. With the increasing use of clinical genetic testing for hereditary cancer risk, accurate variant classification is vital to ensuring appropriate medical management. There is a growing move to consult public databases following receipt of a genetic test result from a clinical laboratory; however, we show that up to 26.7% of variants in BRCA1 and BRCA2 have discordant classifications between ClinVar and a reference laboratory. The findings presented in this paper serve as a note of caution regarding the utility of database consultation. © AlphaMed Press 2017.
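The three-way tally described above (fully concordant, partially concordant, discordant) is straightforward to express; a toy sketch with invented variant records (real ClinVar entries can carry several submitters' classifications, which is what partial concordance captures):

    lab = {"c.68_69delAG": "pathogenic",
           "c.5266dupC": "pathogenic",
           "c.123A>G": "benign"}                       # hypothetical lab calls
    clinvar = {"c.68_69delAG": {"pathogenic"},
               "c.5266dupC": {"pathogenic", "uncertain"},
               "c.123A>G": {"uncertain"}}              # hypothetical ClinVar sets

    for variant, call in lab.items():
        entries = clinvar[variant]
        status = ("fully concordant" if entries == {call}
                  else "partially concordant" if call in entries
                  else "discordant")
        print(variant, "->", status)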
Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier
2017-06-13
Systematic reviews provide the best evidence on the effect of health care interventions. They rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database to provide an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed a database and performed a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met identified needs that had to be addressed in order to ensure the viability of handsearching activities. Three handsearching teams pilot-tested the proposed database. Once the final version of the database was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015, and it can be accessed at www.baderi.com/login.php free of cost. BADERI has an administration subsection, from which the roles of users are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free-text search engine. BADERI allows all references to be exported in ProCite files that can be directly uploaded into CENTRAL. To date, 6284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties. The year of publication ranged between 1957 and 2016. BADERI allows the efficient management of handsearching activities across different countries and institutions. References to all CCTs available in BADERI can be readily submitted to CENTRAL for their potential inclusion in systematic reviews.
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high-performance computation capacity via a parallel data management infrastructure, with parallel data loading and spatial indexing optimizations. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison, where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation, where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high-performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data.
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
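The kind of spatial query the platform supports can be sketched as follows. This single-node Python fragment assumes a PostGIS-enabled PostgreSQL with hypothetical tables markup_alg and markup_human holding boundary polygons; the paper's actual system is a shared-nothing parallel database built on the PAIS model, which is not reproduced here.

```python
import psycopg2

# Jaccard overlap between algorithm and human boundaries for one slide.
QUERY = """
SELECT a.id, h.id,
       ST_Area(ST_Intersection(a.geom, h.geom)) /
       NULLIF(ST_Area(ST_Union(a.geom, h.geom)), 0) AS jaccard
FROM   markup_alg a
JOIN   markup_human h ON a.geom && h.geom  -- index-backed bbox prefilter
WHERE  ST_Intersects(a.geom, h.geom);
"""

with psycopg2.connect("dbname=pais") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for alg_id, human_id, jaccard in cur.fetchall():
            print(alg_id, human_id, jaccard)
```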
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Secondly, the tools for searching databases are usually not adapted to the properties of images. HTML pages do not allow complex searches. Therefore, establishing a more comfortable retrieval involves the use of a higher programming level like JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using JAVA objects as the primary storage container, which are then stored by a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
Carazo, J M; Stelzer, E H
1999-01-01
The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.
An unsupervised approach for measuring myocardial perfusion in MR image sequences
NASA Astrophysics Data System (ADS)
Discher, Antoine; Rougon, Nicolas; Preteux, Francoise
2005-08-01
Quantitatively assessing myocardial perfusion is a key issue for the diagnosis, therapeutic planning and patient follow-up of cardio-vascular diseases. To this end, perfusion MRI (p-MRI) has emerged as a valuable clinical investigation tool thanks to its ability to dynamically image the first pass of a contrast bolus in the framework of stress/rest exams. However, reliable techniques for automatically computing regional first-pass curves from 2D short-axis cardiac p-MRI sequences remain to be elaborated. We address this problem and develop an unsupervised four-step approach comprising: (i) a coarse spatio-temporal segmentation step, which automatically detects a region of interest for the heart over the whole sequence and selects a reference frame with maximal myocardium contrast; (ii) a model-based variational segmentation step of the reference frame, yielding a bi-ventricular partition of the heart into left ventricle, right ventricle and myocardium components; (iii) a respiratory/cardiac motion artifacts compensation step using a novel region-driven intensity-based non-rigid registration technique, which elastically propagates the reference bi-ventricular segmentation over the whole sequence; (iv) a measurement step, delivering first-pass curves over each region of a segmental model of the myocardium. The performance of this approach is assessed over a database of 15 normal and pathological subjects, and compared with perfusion measurements delivered by an MRI manufacturer software package based on manual delineations by a medical expert.
A novel method for efficient archiving and retrieval of biomedical images using MPEG-7
NASA Astrophysics Data System (ADS)
Meyer, Joerg; Pahwa, Ash
2004-10-01
Digital archiving and efficient retrieval of radiological scans have become critical steps in contemporary medical diagnostics. Since more and more images and image sequences (single scans or video) from various modalities (CT/MRI/PET/digital X-ray) are now available in digital formats (e.g., DICOM-3), hospitals and radiology clinics need to implement efficient protocols capable of managing the enormous amounts of data generated daily in a typical clinical routine. We present a method that appears to be a viable way to eliminate the tedious step of manually annotating image and video material for database indexing. MPEG-7 is a new framework that standardizes the way images are characterized in terms of color, shape, and other abstract, content-related criteria. A set of standardized descriptors that are automatically generated from an image is used to compare an image to other images in a database, and to compute the distance between two images for a given application domain. Text-based database queries can be replaced with image-based queries using MPEG-7. Consequently, image queries can be conducted without any prior knowledge of the keys that were used as indices in the database. Since the decoding and matching steps are not part of the MPEG-7 standard, this method also enables searches that were not planned by the time the keys were generated.
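To illustrate the descriptor-distance idea, here is a toy Python example in the spirit of MPEG-7 color descriptors: a normalized per-channel histogram compared with an L1 distance. MPEG-7 defines specific descriptors (e.g., Scalable Color) and, as the abstract notes, leaves decoding and matching unstandardized; this stand-in is not the standardized extraction.

```python
import numpy as np

def color_histogram_descriptor(image, bins=16):
    """Concatenated, normalized per-channel histogram of an HxWx3 image."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def descriptor_distance(d1, d2):
    """L1 distance between descriptors; smaller means more similar images."""
    return float(np.abs(d1 - d2).sum())
```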
Zhang, Peifen; Dreher, Kate; Karthikeyan, A.; Chi, Anjo; Pujar, Anuradha; Caspi, Ron; Karp, Peter; Kirkup, Vanessa; Latendresse, Mario; Lee, Cynthia; Mueller, Lukas A.; Muller, Robert; Rhee, Seung Yon
2010-01-01
Metabolic networks reconstructed from sequenced genomes or transcriptomes can help visualize and analyze large-scale experimental data, predict metabolic phenotypes, discover enzymes, engineer metabolic pathways, and study metabolic pathway evolution. We developed a general approach for reconstructing metabolic pathway complements of plant genomes. Two new reference databases were created and added to the core of the infrastructure: a comprehensive, all-plant reference pathway database, PlantCyc, and a reference enzyme sequence database, RESD, for annotating metabolic functions of protein sequences. PlantCyc (version 3.0) includes 714 metabolic pathways and 2,619 reactions from over 300 species. RESD (version 1.0) contains 14,187 literature-supported enzyme sequences from across all kingdoms. We used RESD, PlantCyc, and MetaCyc (an all-species reference metabolic pathway database), in conjunction with the pathway prediction software Pathway Tools, to reconstruct a metabolic pathway database, PoplarCyc, from the recently sequenced genome of Populus trichocarpa. PoplarCyc (version 1.0) contains 321 pathways with 1,807 assigned enzymes. Comparing PoplarCyc (version 1.0) with AraCyc (version 6.0, Arabidopsis [Arabidopsis thaliana]) showed comparable numbers of pathways distributed across all domains of metabolism in both databases, except for a higher number of AraCyc pathways in secondary metabolism and a 1.5-fold increase in carbohydrate metabolic enzymes in PoplarCyc. Here, we introduce these new resources and demonstrate the feasibility of using them to identify candidate enzymes for specific pathways and to analyze metabolite profiling data through concrete examples. These resources can be searched by text or BLAST, browsed, and downloaded from our project Web site (http://plantcyc.org). PMID:20522724
Recent Advances and Coming Attractions in the NASA/IPAC Extragalactic Database
NASA Astrophysics Data System (ADS)
Mazzarella, Joseph M.; Baker, Kay; Pan Chan, Hiu; Chen, Xi; Ebert, Rick; Frayer, Cren; Helou, George; Jacobson, Jeffery D.; Lo, Tak M.; Madore, Barry; Ogle, Patrick M.; Pevunova, Olga; Steer, Ian; Schmitz, Marion; Terek, Scott
2017-01-01
We review highlights of recent advances and developments underway at the NASA/IPAC Extragalactic Database (NED). Extensive updates have been made to the infrastructure and processes essential for scaling NED for the next steps in its evolution. A major overhaul of the data integration pipeline provides greater modularity and parallelization to increase the rate of source cross-matching and data integration. The new pipeline was used recently to fold in data for nearly 300,000 sources published in over 900 recent journal articles, as well as fundamental parameters for 42 million sources in the Spitzer Enhanced Imaging Products Source List. The latter has added over 360 million photometric measurements at 3.6, 4.5, 5.8, and 8.0 microns (IRAC) and 24 microns (MIPS) to the spectral energy distributions of affected objects in NED. The recent discovery of super-luminous spiral galaxies (Ogle et al. 2016) exemplifies the opportunities for science discovery and data mining available directly from NED’s unique data synthesis, spanning the spectrum from gamma ray through radio frequencies. The number of references in NED has surpassed 103,000. In the coming year, cross-identifications of sources in the 2MASS Point Source Catalog and in the AllWISE Source Catalog with prior objects in the database (including GALEX) will increase the holdings to over a billion distinct objects, providing a rich resource for multi-wavelength analysis. Information about a recent surge in growth of redshift-independent distances in NED is presented at this meeting by Steer et al. (2017). Website updates include a ‘simple search’ to perform common queries in a single entry field, an interface to query the image repository with options to sort and filter the initial results, connectivity to the IRSA Finder Chart service, as well as a program interface to query images using the international virtual observatory Simple Image Access protocol. Graphical characterizations of NED content and completeness are being further developed. A brief summary of new science functionality under development is also given. NED is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials
NASA Astrophysics Data System (ADS)
Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.
2007-03-01
In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata with one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault tolerance and dynamic integration for imaging-based clinical trials.
Sehgal, Manika; Singh, Tiratha Raj
2014-04-01
We present DR-GAS, a unique, consolidated and comprehensive DNA repair genetic association studies database of the human DNA repair system. It presents information on repair genes, assorted mechanisms of DNA repair, linkage disequilibrium, haplotype blocks, nsSNPs, phosphorylation sites, associated diseases, and pathways involved in repair systems. DNA repair is an intricate process which plays an essential role in maintaining the integrity of the genome by eradicating the damaging effect of internal and external changes in the genome. Hence, it is crucial to understand in depth the intact process of DNA repair, the genes involved, the non-synonymous SNPs which may affect function, the phosphorylated residues, and other related genetic parameters. All the corresponding entries for DNA repair genes, such as proteins, OMIM IDs, literature references and pathways, are cross-referenced to their respective primary databases. DNA repair genes and their associated parameters are represented either in tabular or in graphical form through images elucidated by computational and statistical analyses. It is believed that the database will assist molecular biologists, biotechnologists, therapeutic developers and the wider scientific community in finding biologically meaningful information on human DNA repair systems and on the contribution of genetic-level variation to the severe diseases associated with them. DR-GAS is freely available for academic and research purposes at: http://www.bioinfoindia.org/drgas. Copyright © 2014 Elsevier B.V. All rights reserved.
Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma
2014-02-01
Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme.
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
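The alternate-and-refine loop described above (triangulate landmarks, then re-estimate poses) can be sketched with standard OpenCV calls. The LMDB Builder's optimal ray-projection method is not spelled out in this abstract, so solvePnP and triangulatePoints serve here as stand-ins, and K is an assumed known intrinsics matrix.

```python
import cv2
import numpy as np

def refine_pose(landmarks_3d, observations_2d, K):
    """Re-estimate one camera pose from current 3D landmark estimates."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(landmarks_3d, dtype=np.float64),
        np.asarray(observations_2d, dtype=np.float64),
        K, None)
    return rvec, tvec

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched 2D points given two 3x4 projection matrices."""
    X = cv2.triangulatePoints(P1, P2,
                              pts1.T.astype(np.float64),
                              pts2.T.astype(np.float64))  # 4xN homogeneous
    return (X[:3] / X[3]).T  # Nx3 Euclidean landmark positions
```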
Classroom Laboratory Report: Using an Image Database System in Engineering Education.
ERIC Educational Resources Information Center
Alam, Javed; And Others
1991-01-01
Describes an image database system assembled using separate computer components that was developed to overcome text-only computer hardware storage and retrieval limitations for a pavement design class. (JJK)
Murugaiyan, J; Ahrholdt, J; Kowbel, V; Roesler, U
2012-05-01
The possibility of using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for rapid identification of pathogenic and non-pathogenic species of the genus Prototheca has been recently demonstrated. A unique reference database of MALDI-TOF MS profiles for type and reference strains of the six generally accepted Prototheca species was established. The database quality was reinforced after the acquisition of 27 spectra for selected Prototheca strains, with three biological and technical replicates for each of 18 type and reference strains of Prototheca and four strains of Chlorella. This provides reproducible and unique spectra covering a wide m/z range (2000-20 000 Da) for each of the strains used in the present study. The reproducibility of the spectra was further confirmed by employing composite correlation index calculation and main spectra library (MSP) dendrogram creation, available with MALDI Biotyper software. The MSP dendrograms obtained were comparable with the 18S rDNA sequence-based dendrograms. These reference spectra were successfully added to the Bruker database, and the efficiency of identification was evaluated by cross-reference-based and unknown Prototheca identification. It is proposed that the addition of further strains would reinforce the reference spectra library for rapid identification of Prototheca strains to the genus and species/genotype level. © 2011 The Authors. Clinical Microbiology and Infection © 2011 European Society of Clinical Microbiology and Infectious Diseases.
NASA Astrophysics Data System (ADS)
Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab
2017-11-01
Palmprint recognition systems depend on feature extraction. A feature extraction method using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. The fused features are then evaluated using an extreme learning machine classifier, following a feature selection step based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
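A minimal sketch of the fusion-plus-PCA step, assuming per-image HOG and BSIF vectors have already been computed from the wavelet subbands (the extraction itself and the extreme learning machine classifier are omitted; all names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_and_reduce(hog_features, bsif_features, n_components=100):
    """Concatenate the two descriptor matrices, then reduce with PCA.

    hog_features, bsif_features: (n_samples, d1) and (n_samples, d2).
    Returns the reduced features and the fitted PCA model.
    """
    fused = np.hstack([hog_features, bsif_features])
    pca = PCA(n_components=n_components)
    return pca.fit_transform(fused), pca
```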
ERIC Educational Resources Information Center
Grooms, David W.
1988-01-01
Discusses the quality controls imposed on text and image data that is currently being converted from paper to digital images by the Patent and Trademark Office. The methods of inspection used on text and on images are described, and the quality of the data delivered thus far is discussed. (CLB)
Annual Review of Database Development: 1992.
ERIC Educational Resources Information Center
Basch, Reva
1992-01-01
Reviews recent trends in databases and online systems. Topics discussed include new access points for established databases; acquisitions, consolidations, and competition between vendors; European coverage; international services; online reference materials, including telephone directories; political and legal materials and public records;…
Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow
2015-11-01
The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
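The transference step can be illustrated with a small sketch: regress the receiving method on the established one over a method-comparison sample set, then map the existing interval limits through the fitted line. Ordinary least squares is used below for brevity, whereas CLSI EP9-style comparisons commonly use Deming or Passing-Bablok regression; all numbers are made up.

```python
import numpy as np

# Paired method-comparison results (hypothetical analyte, made-up data).
abbott = np.array([3.1, 4.2, 5.0, 6.3, 7.8])    # ARCHITECT values
beckman = np.array([3.0, 4.4, 5.1, 6.1, 8.0])   # AU values

slope, intercept = np.polyfit(abbott, beckman, 1)

# Map existing interval limits (hypothetical) onto the receiving assay.
lower_abbott, upper_abbott = 3.5, 7.2
lower_au = slope * lower_abbott + intercept
upper_au = slope * upper_abbott + intercept
print(f"transferred interval: [{lower_au:.2f}, {upper_au:.2f}]")
```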
Cameron, M; Perry, J; Middleton, J R; Chaffer, M; Lewis, J; Keefe, G P
2018-01-01
This study evaluated MALDI-TOF mass spectrometry and a custom reference spectra expanded database for the identification of bovine-associated coagulase-negative staphylococci (CNS). A total of 861 CNS isolates were used in the study, covering 21 different CNS species. The majority of the isolates were previously identified by rpoB gene sequencing (n = 804) and the remainder were identified by sequencing of hsp60 (n = 56) and tuf (n = 1). The genotypic identification was considered the gold standard identification. Using a direct transfer protocol and the existing commercial database, MALDI-TOF mass spectrometry showed a typeability of 96.5% (831/861) and an accuracy of 99.2% (824/831). Using a custom reference spectra expanded database, which included an additional 13 in-house created reference spectra, isolates were identified by MALDI-TOF mass spectrometry with 99.2% (854/861) typeability and 99.4% (849/854) accuracy. Overall, MALDI-TOF mass spectrometry using the direct transfer method was shown to be a highly reliable tool for the identification of bovine-associated CNS. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree
ERIC Educational Resources Information Center
Chen, Wei-Bang
2012-01-01
The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…
Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.
Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel
2012-01-01
Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
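A flavor of the document-store approach: the sketch below, using the couchdb Python package against a local CouchDB server, stores a design document whose JavaScript map function indexes documents by StudyInstanceUID. The database name, field names, and the example UID are illustrative; the authors' actual views and Ruby loading script are not reproduced.

```python
import couchdb

server = couchdb.Server("http://localhost:5984/")
db = server["dicom"] if "dicom" in server else server.create("dicom")

# A CouchDB "view" is a JavaScript map function in a design document.
design = {
    "_id": "_design/dicom",
    "views": {
        "by_study": {
            "map": """function (doc) {
                if (doc.StudyInstanceUID) {
                    emit(doc.StudyInstanceUID, doc.Modality);
                }
            }"""
        }
    },
}
if "_design/dicom" not in db:
    db.save(design)

# Retrieve all documents for one study (illustrative UID).
for row in db.view("dicom/by_study", key="1.2.840.0.0.0.1"):
    print(row.id, row.value)
```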
A multitemporal (1979-2009) land-use/land-cover dataset of the binational Santa Cruz Watershed
2011-01-01
Trends derived from multitemporal land-cover data can be used to make informed land management decisions and to help managers model future change scenarios. We developed a multitemporal land-use/land-cover dataset for the binational Santa Cruz watershed of southern Arizona, United States, and northern Sonora, Mexico by creating a series of land-cover maps at decadal intervals (1979, 1989, 1999, and 2009) using Landsat Multispectral Scanner and Thematic Mapper data and a classification and regression tree classifier. The classification model exploited phenological changes of different land-cover spectral signatures through the use of biseasonal imagery collected during the (dry) early summer and (wet) late summer following rains from the North American monsoon. Landsat images were corrected to remove atmospheric influences, and the data were converted from raw digital numbers to surface reflectance values. The 14-class land-cover classification scheme is based on the 2001 National Land Cover Database with a focus on "Developed" land-use classes and riverine "Forest" and "Wetlands" cover classes required for specific watershed models. The classification procedure included the creation of several image-derived and topographic variables, including digital elevation model derivatives, image variance, and multitemporal Kauth-Thomas transformations. The accuracy of the land-cover maps was assessed using a random-stratified sampling design, reference aerial photography, and digital imagery. This showed high accuracy results, with kappa values (the statistical measure of agreement between map and reference data) ranging from 0.80 to 0.85.
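As a sketch of the classification-and-regression-tree step, assuming a training table of stacked biseasonal reflectance bands plus the derived variables named above (file names and parameters are hypothetical; the study's actual CART implementation and settings are not given in the abstract):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X_train = np.load("training_pixels.npy")   # (n_pixels, n_features)
y_train = np.load("training_labels.npy")   # (n_pixels,) 14-class codes

cart = DecisionTreeClassifier(min_samples_leaf=10, random_state=0)
cart.fit(X_train, y_train)

X_scene = np.load("scene_pixels.npy")      # pixels of a full scene
land_cover = cart.predict(X_scene)         # one land-cover code per pixel
```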
Nine martian years of dust optical depth observations: A reference dataset
NASA Astrophysics Data System (ADS)
Montabone, Luca; Forget, Francois; Kleinboehl, Armin; Kass, David; Wilson, R. John; Millour, Ehouarn; Smith, Michael; Lewis, Stephen; Cantor, Bruce; Lemmon, Mark; Wolff, Michael
2016-07-01
We present a multi-annual reference dataset of the horizontal distribution of airborne dust from martian year 24 to 32 using observations of the martian atmosphere from April 1999 to June 2015 made by the Thermal Emission Spectrometer (TES) aboard Mars Global Surveyor, the Thermal Emission Imaging System (THEMIS) aboard Mars Odyssey, and the Mars Climate Sounder (MCS) aboard Mars Reconnaissance Orbiter (MRO). Our methodology to build the dataset works by gridding the available retrievals of column dust optical depth (CDOD) from TES and THEMIS nadir observations, as well as the estimates of this quantity from MCS limb observations. The resulting (irregularly) gridded maps (one per sol) were validated with independent observations of CDOD by PanCam cameras and Mini-TES spectrometers aboard the Mars Exploration Rovers "Spirit" and "Opportunity", by the Surface Stereo Imager aboard the Phoenix lander, and by the Compact Reconnaissance Imaging Spectrometer for Mars aboard MRO. Finally, regular maps of CDOD are produced by spatially interpolating the irregularly gridded maps using a kriging method. These latter maps are used as dust scenarios in the Mars Climate Database (MCD) version 5, and are useful in many modelling applications. The two datasets (daily irregularly gridded maps and regularly kriged maps) for the nine available martian years are publicly available as NetCDF files and can be downloaded from the MCD website at the URL: http://www-mars.lmd.jussieu.fr/mars/dust_climatology/index.html
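The final interpolation step can be sketched with the pykrige package, assuming arrays of retrieval longitudes, latitudes, and CDOD values for one sol. All values below are made up, and the paper's variogram choices and gridding details are not reproduced.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Irregularly distributed CDOD retrievals for one sol (made-up values).
lon = np.array([10.0, 45.0, 120.0, 200.0, 310.0])
lat = np.array([-40.0, 10.0, 55.0, -20.0, 0.0])
cdod = np.array([0.3, 0.5, 0.2, 0.8, 0.4])

grid_lon = np.arange(0.0, 360.0, 5.0)
grid_lat = np.arange(-87.5, 90.0, 5.0)

ok = OrdinaryKriging(lon, lat, cdod, variogram_model="spherical")
cdod_map, variance = ok.execute("grid", grid_lon, grid_lat)  # regular map
```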
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
3D change detection at street level using mobile laser scanning point clouds and terrestrial images
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Gruen, Armin
2014-04-01
Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient means of frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation-based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in the image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. This method can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.
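The projection step in the second stage can be sketched as a plain z-buffer: project registered MLS points into an image with a pinhole model and keep only the nearest point per pixel. The paper uses a weighted-window variant, not reproduced here; K, R, and t are assumed known from registration, with K a standard intrinsics matrix.

```python
import numpy as np

def project_with_zbuffer(points, K, R, t, h, w):
    """Project Nx3 world points into an h x w image, keeping nearest hits."""
    cam = R @ points.T + t.reshape(3, 1)      # world -> camera frame, 3xN
    z = cam[2]
    uvw = K @ cam
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    zbuf = np.full((h, w), np.inf)            # per-pixel nearest depth
    idx = np.full((h, w), -1)                 # index of the winning point
    inside = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for i in np.flatnonzero(inside):
        if z[i] < zbuf[v[i], u[i]]:           # keep only the nearest point
            zbuf[v[i], u[i]] = z[i]
            idx[v[i], u[i]] = i
    return zbuf, idx
```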
Reengineering Workflow for Curation of DICOM Datasets.
Bennett, William; Smith, Kirk; Jarosz, Quasar; Nolan, Tracy; Bosch, Walter
2018-06-15
Reusable, publicly available data is a pillar of open science and rapid advancement of cancer imaging research. Sharing data from completed research studies not only saves research dollars required to collect data, but also helps ensure that studies are both replicable and reproducible. The Cancer Imaging Archive (TCIA) is a global shared repository for imaging data related to cancer. Ensuring the consistency, scientific utility, and anonymity of data stored in TCIA is of utmost importance. As the rate of submission to TCIA has been increasing, both in volume and in complexity of the DICOM objects stored, the process of curation of collections has become a bottleneck in acquisition of data. In order to increase the rate of curation of image sets, improve the quality of the curation, and better track the provenance of changes made to submitted DICOM image sets, a custom set of tools was developed, using novel methods for the analysis of DICOM data sets. These tools are written in the programming language perl, use the open-source database PostgreSQL, make use of the perl DICOM routines in the open-source package Posda, and incorporate DICOM diagnostic tools from other open-source packages, such as dicom3tools. These tools are referred to as the "Posda Tools." The Posda Tools are open source and available via git at https://github.com/UAMS-DBMI/PosdaTools . In this paper, we briefly describe the Posda Tools and discuss the novel methods employed by these tools to facilitate rapid analysis of DICOM data, including the following: (1) they use a database schema which is more permissive than, and differently normalized from, traditional DICOM databases; (2) they perform integrity checks automatically on a bulk basis; (3) they apply revisions to DICOM datasets on a bulk basis, either through a web-based interface or via command-line executable perl scripts; (4) all such edits are tracked in a revision tracker and may be rolled back; (5) a UI is provided to inspect the results of such edits, to verify that they are what was intended; (6) DICOM Studies, Series, and SOP instances are identified using "nicknames" which are persistent and have well-defined scope, making expression of reported DICOM errors easier to manage; and (7) a means of rapidly identifying potential duplicate DICOM datasets by pixel data is provided; this can be used, e.g., to identify submission subjects which may relate to the same individual, without identifying the individual.
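Point (7) above, flagging potential duplicates by pixel data, can be illustrated independently of the perl implementation. A minimal Python sketch with pydicom follows (the directory name is hypothetical, and the Posda Tools' actual matching logic is more involved):

```python
import hashlib
from pathlib import Path

import pydicom

def pixel_hash(path):
    """Hash of the raw pixel bytes, ignoring UIDs and other metadata."""
    ds = pydicom.dcmread(path)
    return hashlib.sha256(ds.PixelData).hexdigest()

seen = {}
for f in Path("incoming").rglob("*.dcm"):   # hypothetical submission dir
    h = pixel_hash(f)
    if h in seen:
        print(f"possible duplicate: {f} matches {seen[h]}")
    else:
        seen[h] = f
```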
Decelle, Johan; Romac, Sarah; Stern, Rowena F; Bendif, El Mahdi; Zingone, Adriana; Audic, Stéphane; Guiry, Michael D; Guillou, Laure; Tessier, Désiré; Le Gall, Florence; Gourvil, Priscillia; Dos Santos, Adriana L; Probert, Ian; Vaulot, Daniel; de Vargas, Colomban; Christen, Richard
2015-11-01
Photosynthetic eukaryotes have a critical role as the main producers in most ecosystems of the biosphere. The ongoing environmental metabarcoding revolution opens the perspective for holistic ecosystems biological studies of these organisms, in particular the unicellular microalgae that often lack distinctive morphological characters and have complex life cycles. To interpret environmental sequences, metabarcoding necessarily relies on taxonomically curated databases containing reference sequences of the targeted gene (or barcode) from identified organisms. To date, no such reference framework exists for photosynthetic eukaryotes. In this study, we built the PhytoREF database that contains 6490 plastidial 16S rDNA reference sequences that originate from a large diversity of eukaryotes representing all known major photosynthetic lineages. We compiled 3333 amplicon sequences available from public databases and 879 sequences extracted from plastidial genomes, and generated 411 novel sequences from cultured marine microalgal strains belonging to different eukaryotic lineages. A total of 1867 environmental Sanger 16S rDNA sequences were also included in the database. Stringent quality filtering and a phylogeny-based taxonomic classification were applied for each 16S rDNA sequence. The database mainly focuses on marine microalgae, but sequences from land plants (representing half of the PhytoREF sequences) and freshwater taxa were also included to broaden the applicability of PhytoREF to different aquatic and terrestrial habitats. PhytoREF, accessible via a web interface (http://phytoref.fr), is a new resource in molecular ecology to foster the discovery, assessment and monitoring of the diversity of photosynthetic eukaryotes using high-throughput sequencing. © 2015 John Wiley & Sons Ltd.
2008-11-01
Although the image produced in Figure 9 is useful, the image itself is not the most important aspect of the process. … climatology for the Scotian Shelf. The database is intended for use while ashore and also while at sea. Trial Q316 was the maiden voyage of the database … the process of data transfer from external sources to the database, and also how the database can be restructured to be more accommodating of …
National Vulnerability Database (NVD)
National Institute of Standards and Technology Data Gateway
National Vulnerability Database (NVD) (Web, free access) NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Method and apparatus for the simultaneous display and correlation of independently generated images
Vaitekunas, Jeffrey J.; Roberts, Ronald A.
1991-01-01
An apparatus and method for location-by-location correlation of multiple images from Non-Destructive Evaluation (NDE) and other sources. Multiple images of a material specimen are displayed on one or more monitors of an interactive graphics system. Specimen landmarks are located in each image, and mapping functions from a reference image to each other image are calculated using the landmark locations. A location selected by positioning a cursor in the reference image is mapped to the other images, and location identifiers are simultaneously displayed in those images. Movement of the cursor in the reference image causes simultaneous movement of the location identifiers in the other images to positions corresponding to the location of the reference-image cursor.
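One plausible form for the patent's landmark-based mapping functions is a least-squares affine map fitted to the matched landmark locations; the sketch below assumes that form, which the patent text itself does not pin down.

```python
import numpy as np

def fit_affine_map(ref_landmarks, img_landmarks):
    """Least-squares affine map from reference-image to other-image coords.

    ref_landmarks, img_landmarks: (N, 2) arrays of matched points, N >= 3.
    Returns a (3, 2) matrix M so that [x, y, 1] @ M gives mapped coords.
    """
    n = len(ref_landmarks)
    A = np.hstack([ref_landmarks, np.ones((n, 1))])      # (N, 3)
    M, *_ = np.linalg.lstsq(A, img_landmarks, rcond=None)
    return M

def map_cursor(M, xy):
    """Map a cursor position in the reference image into another image."""
    return np.array([xy[0], xy[1], 1.0]) @ M
```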
Bylund, William E.; de Weber, Kevin
2010-01-01
Context: Semimembranosus tendinopathy (SMT) is an uncommon cause of chronic knee pain that is rarely described in the medical literature and may be underdiagnosed or inadequately treated owing to a lack of understanding of the condition. Evidence Acquisition: A search of the entire PubMed (MEDLINE) database using the terms knee pain semimembranosus and knee tendinitis semimembranosus, returned only 5 references about SMT—4 case series and 1 case report—and several relevant anatomical or imaging references. Results: The incidence of SMT is unknown in the athletic population and is probably more common in older patients. The usual presentation for SMT is aching posteromedial knee pain. Physical examination can usually localize the area of tenderness to the distal semimembranosus tendon or its insertion on the medial proximal tibia. In unclear cases, bone scan, magnetic resonance imaging, or ultrasound may distinguish SMT from other causes of posteromedial knee pain. Treatment should begin with relative rest, ice, nonsteroidal anti-inflammatory drugs, and rehabilitative exercise. In the minority of cases that persist greater than 3 months, a corticosteroid injection at the tendon insertion site may be effective. Surgery to reroute and reattach the tendon is rarely needed but may be effective. Conclusion: SMT is an uncommon cause of knee pain, but timely diagnosis can lead to effective treatments. PMID:23015963
Evaluation of consumer drug information databases.
Choi, J A; Sullivan, J; Pankaskie, M; Brufsky, J
1999-01-01
To evaluate prescription drug information contained in six consumer drug information databases available on CD-ROM, and to make health care professionals aware of the information provided, so that they may appropriately recommend these databases for use by their patients. Observational study of six consumer drug information databases: The Corner Drug Store, Home Medical Advisor, Mayo Clinic Family Pharmacist, Medical Drug Reference, Mosby's Medical Encyclopedia, and PharmAssist. Information on 20 frequently prescribed drugs was evaluated in each database. The databases were ranked using a point-scale system based on primary and secondary assessment criteria. For the primary assessment, 20 categories of information based on those included in the 1998 edition of the USP DI Volume II, Advice for the Patient: Drug Information in Lay Language were evaluated for each of the 20 drugs, and each database could earn up to 400 points (for example, 1 point was awarded if the database mentioned a drug's mechanism of action). For the secondary assessment, the inclusion of 8 additional features that could enhance the utility of the databases was evaluated (for example, 1 point was awarded if the database contained a picture of the drug), and each database could earn up to 8 points. The results of the primary and secondary assessments, listed in order of highest to lowest number of points earned, are as follows: primary assessment--Mayo Clinic Family Pharmacist (379), Medical Drug Reference (251), PharmAssist (176), Home Medical Advisor (113.5), The Corner Drug Store (98), and Mosby's Medical Encyclopedia (18.5); secondary assessment--The Mayo Clinic Family Pharmacist (8), The Corner Drug Store (5), Mosby's Medical Encyclopedia (5), Home Medical Advisor (4), Medical Drug Reference (4), and PharmAssist (3). The Mayo Clinic Family Pharmacist was the most accurate and complete source of prescription drug information based on the USP DI Volume II and would be an appropriate database for health care professionals to recommend to patients.
Isakozawa, Shigeto; Fuse, Taishi; Amano, Junpei; Baba, Norio
2018-04-01
As alternatives to the diffractogram-based method in high-resolution transmission electron microscopy, a spot auto-focusing (AF) method and a spot auto-stigmation (AS) method are presented with a unique high-definition auto-correlation function (HD-ACF). The HD-ACF clearly resolves the ACF central peak region in small amorphous-thin-film images, reflecting the phase contrast transfer function. At a 300-k magnification for a 120-kV transmission electron microscope, the smallest areas used are 64 × 64 pixels (~3 nm2) for the AF and 256 × 256 pixels for the AS. A useful advantage of these methods is that the AF function has an allowable accuracy even for a low s/n (~1.0) image. To use these methods, a reference database on the defocus dependency of the HD-ACF must be prepared in advance by acquiring through-focus amorphous-thin-film images. This can be very beneficial because the specimens are not limited to approximations of weak phase objects but can be extended to objects outside such approximations.
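For reference, the generic image autocorrelation that the HD-ACF refines can be computed via the Wiener-Khinchin relation; the paper's 'high-definition' processing of the central peak is not reproduced in this sketch.

```python
import numpy as np

def autocorrelation(image):
    """ACF of an image patch: inverse FFT of the power spectrum.

    The mean is removed first so the zero-lag peak reflects structure
    rather than the DC level; fftshift centers the peak for inspection.
    """
    img = image - image.mean()
    spectrum = np.fft.fft2(img)
    acf = np.fft.ifft2(np.abs(spectrum) ** 2).real
    return np.fft.fftshift(acf)
```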
NASA Astrophysics Data System (ADS)
Yang, Gongping; Zhou, Guang-Tong; Yin, Yilong; Yang, Xiukun
2010-12-01
A critical step in an automatic fingerprint recognition system is the segmentation of fingerprint images. Existing methods are usually designed to segment fingerprint images originated from a certain sensor. Thus their performances are significantly affected when dealing with fingerprints collected by different sensors. This work studies the sensor interoperability of fingerprint segmentation algorithms, which refers to the algorithm's ability to adapt to the raw fingerprints obtained from different sensors. We empirically analyze the sensor interoperability problem, and effectively address the issue by proposing a k-means based segmentation method called SKI. SKI clusters foreground and background blocks of a fingerprint image based on the k-means algorithm, where a fingerprint block is represented by a 3-dimensional feature vector consisting of block-wise coherence, mean, and variance (abbreviated as CMV). SKI also employs morphological postprocessing to achieve favorable segmentation results. We perform SKI on each fingerprint to ensure sensor interoperability. The interoperability and robustness of our method are validated by experiments performed on a number of fingerprint databases which are obtained from various sensors.
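The CMV clustering at the core of SKI can be sketched as follows; the coherence estimate below is a standard gradient-based formula and the morphological post-processing is omitted, so this illustrates the idea rather than the authors' exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_fingerprint(img, block=16):
    """Cluster fingerprint blocks on (coherence, mean, variance) features."""
    gy, gx = np.gradient(img.astype(np.float64))
    rows, cols = img.shape[0] // block, img.shape[1] // block
    feats = []
    for i in range(rows):
        for j in range(cols):
            sl = np.s_[i*block:(i+1)*block, j*block:(j+1)*block]
            gxx = (gx[sl] ** 2).sum()
            gyy = (gy[sl] ** 2).sum()
            gxy = (gx[sl] * gy[sl]).sum()
            # Standard orientation-coherence estimate in [0, 1].
            coh = np.hypot(gxx - gyy, 2 * gxy) / (gxx + gyy + 1e-9)
            feats.append([coh, img[sl].mean(), img[sl].var()])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(feats))
    return labels.reshape(rows, cols)  # block-level fg/bg map
```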
NASA Astrophysics Data System (ADS)
Giraldo, Diana L.; García-Arteaga, Juan D.; Romero, Eduardo
2016-03-01
Initial diagnosis of Alzheimer's disease (AD) is based on the patient's clinical history and a battery of neuropsychological tests. This work presents an automatic strategy that uses Structural Magnetic Resonance Imaging (MRI) to learn brain models for different stages of the disease using information from clinical assessments. The discriminant power of the models in different anatomical areas is then compared by using each brain region of the models as a reference frame for the classification problem; a Receiver Operating Characteristic (ROC) curve is constructed from the projection onto the AD model. Validation was performed using a leave-one-out scheme with 86 subjects (20 AD and 60 NC) from the Open Access Series of Imaging Studies (OASIS) database. The region with the best classification performance was the left amygdala, where it is possible to achieve a sensitivity and specificity of 85% at the same time. The regions with the best performance, in terms of the AUC, are in strong agreement with those described as important for the diagnosis of AD in clinical practice.
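The ROC evaluation reduces to a few lines once per-subject projection scores exist; the values below are made up solely to show the mechanics with scikit-learn.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

scores = np.array([0.9, 0.8, 0.3, 0.7, 0.2, 0.4])  # projection onto AD model
labels = np.array([1, 1, 0, 1, 0, 0])              # 1 = AD, 0 = NC

fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC:", auc(fpr, tpr))
```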
Image management within a PACS
NASA Astrophysics Data System (ADS)
Glicksman, Robert A.; Prior, Fred W.; Wilson, Dennis L.
1993-09-01
The full benefits of a PACS system cannot be achieved by a departmental system, as films must still be made to service referring physicians and clinics. Therefore, a full hospital PACS must provide workstations throughout the hospital which are connected to the central file server and database, but which present 'clinical' views of radiological data. In contrast to the radiologist, the clinician needs to select examinations from a 'patient list' which presents the results of his/her radiology referrals. The most important data for the clinician is the radiology report, which must be immediately available upon selection of the examination. The images themselves, perhaps with annotations provided by the reading radiologist, must also be available within a few seconds of selection. Furthermore, the ability to display radiologist-selected relevant historical images along with the new examination is necessary in those instances where the radiologist felt that certain historical images were important in the interpretation and diagnosis of the patient. Therefore, views of the new and historical data along clinical lines, conference preparation features, and modality- and body-part-specific selections are also required to successfully implement a full hospital PACS. This paper describes the concepts for image selection and presentation at PACS workstations, both 'diagnostic' workstations within the radiology department and 'clinical' workstations which support the rest of the hospital and outpatient clinics.
Talk this way: the effect of prosodically conveyed semantic information on memory for novel words.
Shintel, Hadas; Anderson, Nathan L; Fenn, Kimberly M
2014-08-01
Speakers modulate their prosody to express not only emotional information but also semantic information (e.g., raising pitch for upward motion). Moreover, this information can help listeners infer meaning. Work investigating the communicative role of prosodically conveyed meaning has focused on reference resolution, and potential mnemonic benefits remain unexplored. We investigated the effect of prosody on memory for the meaning of novel words, even when it conveys superfluous information. Participants heard novel words, produced with congruent or incongruent prosody, and viewed image pairs representing the intended meaning and its antonym (e.g., a small and a large dog). Importantly, an arrow indicated the image representing the intended meaning, resolving the ambiguity. Participants then completed 2 memory tests, either immediately after learning or after a 24-hr delay, on which they chose an image (out of a new image pair) and a definition that best represented the word. On the image test, memory was similar on the immediate test, but incongruent prosody led to greater loss over time. On the definition test, memory was better for congruent prosody at both times. Results suggest that listeners extract semantic information from prosody even when it is redundant and that prosody can enhance memory, beyond its role in comprehension. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a simple, user-friendly graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user ratings. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input, allowing for countless combinations of returned images. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking images as favorites, and searchable image metadata for instant results.
NASA Astrophysics Data System (ADS)
Muramatsu, Chisako; Hatanaka, Yuji; Iwase, Tatsuhiko; Hara, Takeshi; Fujita, Hiroshi
2010-03-01
Abnormalities of the retinal vasculature can indicate health conditions in the body, such as high blood pressure and diabetes. Providing an automatically determined width ratio of arteries and veins (A/V ratio) on retinal fundus images may help physicians in the diagnosis of hypertensive retinopathy, which may cause blindness. The purpose of this study was to detect major retinal vessels and classify them into arteries and veins for the determination of the A/V ratio. Images used in this study were obtained from the DRIVE database, which consists of 20 cases each for training and testing vessel detection algorithms. Starting with the reference standard of vasculature segmentation provided in the database, major arteries and veins in the upper and lower temporal regions were manually selected to establish the gold standard. We applied the black top-hat transformation and a double-ring filter to detect retinal blood vessels. From the extracted vessels, large vessels extending from the optic disc to the temporal regions were selected as target vessels for calculation of the A/V ratio. Image features were extracted from the vessel segments lying between a quarter disc diameter and one disc diameter from the edge of the optic disc. The target segments in the training cases were classified into arteries and veins using linear discriminant analysis, and the selected parameters were applied to those in the test cases. Out of 40 pairs, 30 pairs (75%) of arteries and veins in the 20 test cases were correctly classified. The result can be used for the automated calculation of the A/V ratio.
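A hedged sketch of the two stages named above: vessel enhancement with a black top-hat transform (scikit-image) and artery/vein classification with linear discriminant analysis (scikit-learn). The structuring-element size and the feature set are assumptions, and the double-ring filter is not reproduced here.

```python
import numpy as np
from skimage.morphology import black_tophat, disk
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def enhance_vessels(green_channel):
    # Dark vessels on a bright fundus become bright after a black top-hat
    return black_tophat(green_channel, disk(8))  # footprint radius is an assumption

# Hypothetical per-segment features (e.g., mean intensities per color channel)
X_train = np.random.rand(40, 3)
y_train = np.random.randint(0, 2, 40)   # 0 = vein, 1 = artery (placeholder labels)
X_test = np.random.rand(10, 3)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(lda.predict(X_test))              # artery/vein decision per segment
```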
Mars global digital dune database and initial science results
Hayward, R.K.; Mullins, K.F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, A.; Christensen, P.R.
2007-01-01
A new Mars Global Digital Dune Database (MGD3) constructed using Thermal Emission Imaging System (THEMIS) infrared (IR) images provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields (area >1 km2) that will help researchers to understand the global climatic and sedimentary processes that have shaped the surface of Mars. MGD3 extends from 65°N to 65°S latitude and includes ~550 dune fields, covering ~70,000 km2, with an estimated total volume of ~3,600 km3. This area, when combined with polar dune estimates, suggests moderate- to large-size dune field coverage on Mars may total ~800,000 km2, ~6 times less than the total areal estimate of ~5,000,000 km2 for terrestrial dunes. Where the availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow-angle (MOC NA) images allow, we classify dunes and include dune slipface measurements, which are derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid (referred to as the dune centroid azimuth) is calculated and can provide an accurate method for tracking dune migration within smooth-floored craters. These indicators of wind direction are compared to output from a general circulation model (GCM). Dune centroid azimuth values generally correlate with regional wind patterns. Slipface orientations are less well correlated, suggesting that local topographic effects may play a larger role in dune orientation than regional winds. Copyright 2007 by the American Geophysical Union.
Shimal, A; Davies, A M; James, S L J; Grimer, R J
2010-05-01
To investigate the association between a fatigue-type stress fracture and a fibrous cortical defect/non-ossifying fibroma (FCD/NOF) of the lower limb long bones in skeletally immature patients. The patient database of a specialist orthopaedic oncology centre was searched to determine the number of skeletally immature patients (
Research on gait-based human identification
NASA Astrophysics Data System (ADS)
Li, Youguo
Gait recognition refers to the automatic identification of individuals based on their style of walking. This paper proposes a gait recognition method based on a Continuous Hidden Markov Model with a mixture of Gaussians (G-CHMM). First, we initialize a Gaussian mixture model for the training image sequence with the K-means algorithm, then train the HMM parameters using the Baum-Welch algorithm. The gait feature sequences are trained to obtain a continuous HMM for every person, so that the 7 key frames and the trained HMM represent each person's gait sequence. Finally, recognition is performed with the forward algorithm. Experiments on the CASIA gait databases obtain a comparatively high correct-identification rate and comparatively strong robustness to a variety of body angles.
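A minimal sketch of this training/recognition scheme using hmmlearn, assuming per-frame gait feature vectors are already extracted; hmmlearn's fit performs Baum-Welch re-estimation and score evaluates the forward-algorithm log-likelihood. The state and mixture counts are assumptions.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

def train_person_model(sequences, n_states=7, n_mix=3):
    """Train one Gaussian-mixture HMM per person from their gait feature sequences."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50)
    model.fit(X, lengths)               # Baum-Welch re-estimation
    return model

def identify(models, test_seq):
    # Pick the person whose HMM gives the highest forward log-likelihood
    return max(models, key=lambda pid: models[pid].score(test_seq))
```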
An X Window system for statlab results reporting.
Barrows, R. C.; Allen, B.; Fink, D. J.
1993-01-01
We have developed a system that receives "stat" results encoded in Health Level Seven from the Laboratory Information System, prints a report in destination Intensive Care Units (ICUs), and captures the data for review in a custom spreadsheet format at color X-terminals located in ICUs. Available services include a reference nomogram plot of arterial blood gas data, printed summaries, automated access to the Clinical Information System and a Medline database, electronic mail, a simulated electronic calculator, and general news and information. Security mechanisms include an audit trail of user activities on the system. Noteworthy technical aspects and non-technical factors impacting success are discussed. PMID:8130490
Plant Reactome: a resource for plant pathways and comparative analysis
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D.; Wu, Guanming; Fabregat, Antonio; Elser, Justin L.; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D.; Ware, Doreen; Jaiswal, Pankaj
2017-01-01
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. PMID:27799469
Associative memory model for searching an image database by image snippet
NASA Astrophysics Data System (ADS)
Khan, Javed I.; Yun, David Y.
1994-09-01
This paper presents an associative memory called multidimensional holographic associative computing (MHAC), which can potentially be used to perform feature-based image database queries using an image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform searches on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image databases because quite often the cognitive index features in the snippet are statistically weak. Unlike conventional artificial associative memories, MHAC uses a two-level representation and incorporates additional meta-knowledge about the reliability status of the segments of information it receives and forwards. In this paper we present an analysis of the focus characteristics of MHAC.
USDA-ARS?s Scientific Manuscript database
Many species of wild game and fish that are legal to hunt or catch do not have nutrition information in the USDA National Nutrient Database for Standard Reference (SR). Among those species that lack nutrition information are brook trout. The research team worked with the Nutrient Data Laboratory wit...
The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested public to search and download thousands of animal toxicity testing results for hundreds of chemicals that were previously found only in paper documents. Currently, there are 474 chemicals in ToxRefDB, primarily the data rich pesticide active ingredients, but the number will continue to expand.
Online Database Coverage of Pharmaceutical Journals.
ERIC Educational Resources Information Center
Snow, Bonnie
1984-01-01
Describes compilation of data concerning pharmaceutical journal coverage in online databases which aid information providers in collection development and database selection. Methodology, results (a core collection, overlap, timeliness, geographic scope), and implications are discussed. Eight references and a list of 337 journals indexed online in…
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
16 CFR § 1102.6 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Image Detective 2.0: Engaging Citizen Scientists with NASA Astronaut Photography
NASA Technical Reports Server (NTRS)
Higgins, Melissa; Graff, Paige Valderrama; Heydorn, James; Jagge, Amy; Vanderbloemen, Lisa; Stefanov, William; Runco, Susan; Lehan, Cory; Gay, Pamela
2017-01-01
Image Detective 2.0 engages citizen scientists with NASA astronaut photography of the Earth obtained by crew members on the International Space Station (ISS). Engaged citizen scientists are helping to build a more comprehensive and searchable database by geolocating this imagery and contributing to new imagery collections. Image Detective 2.0 is the newest addition to the suite of citizen scientist projects available through CosmoQuest, an effort led by the Astronomical Society of the Pacific (ASP) and supported through a NASA Science Mission Directorate Cooperative Agreement Notice award. CosmoQuest hosts a number of citizen science projects enabling individuals from around the world to engage in authentic NASA science. Image Detective 2.0, an effort that focuses on imagery acquired by astronauts on the International Space Station, builds on work initiated in 2012 by scientists and education specialists at the NASA Johnson Space Center. Through the many lessons learned, Image Detective 2.0 enhances the original project by offering new and improved options for participation. Existing users, as well as new Image Detective participants joining through the CosmoQuest platform, gain first-hand experience working with astronaut photography and become more engaged with this valuable data being obtained from the International Space Station. Citizens around the world are captivated by astronauts living and working in space. As crew members have a unique vantage point from which to view our Earth, the Crew Earth Observations (CEO) online database, referred to as the Gateway to Astronaut Photography of Earth (https://eol.jsc.nasa.gov/), provides a means for crew members to share their unique views of our home planet from the ISS with the scientific community and the public. Astronaut photography supports multiple uses including scientific investigations, visualizations, education, and outreach. These astronaut images record how the planet is changing over time, from human-made changes like urban growth and agriculture, to natural features and landforms such as tropical cyclones, aurora, coastlines, volcanoes and more. This imagery provides researchers on Earth with data to understand the planet from the perspective of the ISS, and is a useful complement to other remotely sensed datasets collected from robotic satellite platforms.
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The widespread adoption of digital technology in the medical field has led to demand for a high-quality, high-speed, and user-friendly digital image presentation system for daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270 GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90 GB RAID, and a database server workstation. A 100 Mbps Ethernet LAN interconnects all the subsystems. Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size and found them applicable to actual medical diagnosis. They also appreciated the short image-switching time, which contributed to smooth presentation. Thus, we confirmed that the system's characteristics met the requirements.
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results and discuss the performances in light of the algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluations. PMID:24951685
An image database management system for conducting CAD research
NASA Astrophysics Data System (ADS)
Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.
2007-03-01
The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as the desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
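A minimal sketch of this kind of research metadata store, using sqlite3 in place of the paper's MySQL backend; the table and column names are illustrative, not the authors' schema.

```python
import sqlite3

con = sqlite3.connect("cad_research.db")
con.execute("""CREATE TABLE IF NOT EXISTS images (
    id INTEGER PRIMARY KEY,
    modality TEXT,          -- e.g. 'FFDM', 'ultrasound', 'MRI'
    anonymized_id TEXT,     -- patient identity removed at ingest
    lesion_type TEXT,
    biopsy_proven INTEGER)""")
con.execute("INSERT INTO images (modality, anonymized_id, lesion_type, biopsy_proven) "
            "VALUES (?, ?, ?, ?)", ("ultrasound", "anon-0001", "mass", 1))
con.commit()

# Generate a research subset matching specific search criteria
rows = con.execute("SELECT id, anonymized_id FROM images "
                   "WHERE modality = ? AND biopsy_proven = 1",
                   ("ultrasound",)).fetchall()
print(rows)
```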
Visualization and manipulating the image of a formal data structure (FDS)-based database
NASA Astrophysics Data System (ADS)
Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien
1994-08-01
A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS has been implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. One objective of this study is to let an end-user alter and add terrain objects in the image. The drawing application creates an export file that is compared with the import file. Differences between these files result in updates to the database, which involve consistency checks. In this study AutoCAD is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and AutoCAD. The data structure of the FDS is compared to the data structure of AutoCAD, and the FDS data are converted into an AutoCAD structure equivalent to the FDS.
Multiple Representations-Based Face Sketch-Photo Synthesis.
Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie
2016-11-01
Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, the CUHK Face Sketch FERET Database, the IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.
Data-Based Decision Making in Education: Challenges and Opportunities
ERIC Educational Resources Information Center
Schildkamp, Kim, Ed.; Lai, Mei Kuin, Ed.; Earl, Lorna, Ed.
2013-01-01
In a context where schools are held more and more accountable for the education they provide, data-based decision making has become increasingly important. This book brings together scholars from several countries to examine data-based decision making. Data-based decision making in this book refers to making decisions based on a broad range of…
Gorman, Sean K; Slavik, Richard S; Lam, Stefanie
2012-01-01
Background: Clinicians commonly rely on tertiary drug information references to guide drug dosages for patients who are receiving continuous renal replacement therapy (CRRT). It is unknown whether the dosage recommendations in these frequently used references reflect the most current evidence. Objective: To determine the presence and accuracy of drug dosage recommendations for patients undergoing CRRT in 4 drug information references. Methods: Medications commonly prescribed during CRRT were identified from an institutional medication inventory database, and evidence-based dosage recommendations for this setting were developed from the primary and secondary literature. The American Hospital Formulary System—Drug Information (AHFS–DI), Micromedex 2.0 (specifically the DRUGDEX and Martindale databases), and the 5th edition of Drug Prescribing in Renal Failure (DPRF5) were assessed for the presence of drug dosage recommendations in the CRRT setting. The dosage recommendations in these tertiary references were compared with the recommendations derived from the primary and secondary literature to determine concordance. Results: Evidence-based drug dosage recommendations were developed for 33 medications administered in patients undergoing CRRT. The AHFS–DI provided no dosage recommendations specific to CRRT, whereas the DPRF5 provided recommendations for 27 (82%) of the medications and the Micromedex 2.0 application for 20 (61%) (13 [39%] in the DRUGDEX database and 16 [48%] in the Martindale database, with 9 medications covered by both). The dosage recommendations were in concordance with evidence-based recommendations for 12 (92%) of the 13 medications in the DRUGDEX database, 26 (96%) of the 27 in the DPRF5, and all 16 (100%) of those in the Martindale database. Conclusions: One prominent tertiary drug information resource provided no drug dosage recommendations for patients undergoing CRRT. However, 2 of the databases in an Internet-based medical information application and the latest edition of a renal specialty drug information resource provided recommendations for a majority of the medications investigated. Most dosage recommendations were similar to those derived from the primary and secondary literature. The most recent edition of the DPRF is the preferred source of information when prescribing dosage regimens for patients receiving CRRT. PMID:22783029
DNApod: DNA polymorphism annotation database from next-generation sequence read archives.
Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu
2017-01-01
With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information.
DNApod: DNA polymorphism annotation database from next-generation sequence read archives
Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu
2017-01-01
With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information. PMID:28234924
DiRoberto, Cole; Lehto, Crystal; Baccei, Steven J
2016-08-01
The purpose of this study was to improve the transcription of patient information from imaging study requisitions to the radiology information database at a single institution. Five hundred radiology reports from adult outpatient radiographic examinations were chosen randomly from the radiology information system (RIS) and categorized according to their degree of concordance with their corresponding clinical order indications. The number and types of grammatical errors and types of order forms were also recorded. Countermeasures centered on the education of the technical staff and referring physician offices and the implementation of a checklist. Another sample of 500 reports was taken after the implementation of the countermeasures and compared with the baseline data using a χ² test. The number of RIS indications perfectly concordant with their corresponding clinical order indications increased from 232 (46.4%) to 314 (62.8%) after the implementation of the countermeasures (P < .0001). The number of partially concordant matches due to inadequate RIS indications dropped from 162 (32.4%) to 114 (22.8%) (P < .001), whereas the number of partially concordant matches due to inadequate clinical order indications increased from 22 (4.4%) to 57 (11.4%) (P < .0001). The number of discordant pairings dropped from 84 (16.8%) to 15 (3%) (P < .0001). Technologists began to input additional patient information obtained from the patients (not present in the image requisitions) in the RIS after the implementation of the countermeasures. The education of technical staff members and the implementation of a checklist markedly improved the information provided to radiologists on image requisitions from referring providers. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Array Databases: Agile Analytics (not just) for the Earth Sciences
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2015-12-01
Gridded data, such as images, image timeseries, and climate datacubes, are today managed separately from the metadata, and with different, restricted retrieval capabilities. While databases are good at metadata modelled in tables, XML hierarchies, or RDF graphs, they traditionally do not support multi-dimensional arrays. This gap is being closed by Array Databases, pioneered by the scalable rasdaman ("raster data manager") array engine. Its declarative query language, rasql, extends SQL with array operators which are optimized and parallelized on the server side. Installations can easily be mashed up securely, thereby enabling large-scale, location-transparent query processing in federations. Domain experts value the integration with their commonly used tools, leading to a quick learning curve. Earth, space, and life sciences, but also social sciences as well as business, have massive amounts of data and complex analysis challenges that are answered by rasdaman. As of today, rasdaman is mature and in operational use on hundreds of terabytes of timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Additionally, its concepts have shaped international Big Data standards in the field, including the forthcoming array extension to ISO SQL, many of which are meanwhile supported by both open-source and commercial systems. In the geo field, rasdaman is the reference implementation for the Open Geospatial Consortium (OGC) Big Data standard, WCS, now also under adoption by ISO. Further, rasdaman is in the final stage of OSGeo incubation. In this contribution we present array queries a la rasdaman, describe the architecture and the novel optimization and parallelization techniques introduced in 2015, and put this in the context of the intercontinental EarthServer initiative, which utilizes rasdaman to enable agile analytics on petascale datacubes.
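As an illustration of the kind of declarative array operations a rasql query pushes to the server (trimming, cell-wise arithmetic, aggregation over a datacube), here is a numpy analogue; this is not rasql syntax or the rasdaman API, just the operations expressed client-side on a toy cube.

```python
import numpy as np

cube = np.random.rand(365, 256, 256)     # toy time/lat/lon datacube

window = cube[0:30, 100:200, 100:200]    # subsetting, as in an array trim
anomaly = window - window.mean(axis=0)   # cell-wise arithmetic over the cube
daily_max = anomaly.max(axis=(1, 2))     # aggregation ("condenser") per time step
print(daily_max.shape)                   # (30,)
```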
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita; Nicolae, Mariana Carmen
2011-12-01
The co-occurrence matrix has been applied successfully to the characterization of echographic images because it contains information about the spatial distribution of grey-scale levels in an image. The paper deals with the analysis of pixels in selected regions of interest of US images of the liver. The useful information obtained refers to texture features such as entropy, contrast, dissimilarity and correlation extracted with the co-occurrence matrix. The analyzed US images were grouped into two distinct sets: healthy liver and steatosis (fatty) liver. These two sets of echographic images of the liver form a database that includes only histologically confirmed cases: 10 images of healthy liver and 10 images of steatosis liver. The healthy subjects are used to compute the four textural indices and also serve as the control dataset. We chose to study this disease because steatosis is the abnormal retention of lipids in cells. The texture features are statistical measures and can be used to characterize the irregularity of tissues. The goal is to extract the information using the nearest-neighbor classification algorithm. The K-NN algorithm is a powerful tool for classifying texture features, grouping the healthy-liver features into a training set and the steatosis-liver features into a holdout set. The results could be used to quantify the texture information and will allow a clear distinction between healthy and steatosis liver.
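A minimal sketch of the four co-occurrence texture indices named above plus k-NN classification, using scikit-image and scikit-learn; the ROI size, quantization, and GLCM offsets are assumptions rather than the study's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(roi_8bit):
    """Entropy, contrast, dissimilarity and correlation from one GLCM."""
    glcm = graycomatrix(roi_8bit, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # not in graycoprops
    return [entropy,
            graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "dissimilarity")[0, 0],
            graycoprops(glcm, "correlation")[0, 0]]

# 10 healthy + 10 steatosis ROIs (placeholders for the US image patches)
rois = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
X = np.array([glcm_features(r) for r in rois])
y = np.r_[np.zeros(10), np.ones(10)]    # 0 = healthy, 1 = steatosis

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([glcm_features(rois[0])]))
```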
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
Data tables for the 1993 National Transit Database section 15 report year
DOT National Transportation Integrated Search
1994-12-01
The Data Tables For the 1993 National Transit Database Section 15 Report Year is one of three publications comprising the 1993 Annual Report. Also referred to as the National Transit Database Reporting System, it is administered by the Federal Transi...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FISH AND... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
Clapworthy, G; Krokos, M; Vasilonikolidakis, N
1997-11-01
Our paper describes an integrated methodology addressing the development of an Internet service for medical professionals, medical students and, generally, people interested in medicine. The service (currently developed in the framework of IAEVA, a Telematics Application Programme project of the European Union) incorporates a mechanism for retrieving, from a relational database (reference library), 3D volumetric models of human organs reconstructed from computed tomography (CT) and/or magnetic resonance imaging (MRI). Retrieval is implemented in a way that is transparent to the actual physical location of the database. Prospective users are provided with a Solid Object Viewer that offers manipulation (rotation, zooming, dissection, etc.) of the 3D volumetric models. The service constitutes an excellent foundation for understanding for medical professionals and students and a mechanism for broad and rapid dissemination of information related to particular pathological conditions; although pathological conditions of the knee and skin are currently supported, our methodology allows easy extension of the service to other human organs, ultimately covering the entire human body. The service accepts most Internet browsers and supports MS-Windows 32 platforms; no graphics accelerators or any specialized hardware are necessary, thereby making the service available to the widest possible audience. Nevertheless, the service operates in near real-time not only over high-speed, expensive network lines but also over low/medium-speed network connections.
Very fast road database verification using textured 3D city models obtained from airborne imagery
NASA Astrophysics Data System (ADS)
Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie
2014-10-01
Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV images. This algorithm contains two processes, which exchange input and output but basically run independently from each other: textured urban terrain reconstruction and road verification. The first process performs a dense photogrammetric reconstruction of the 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of road. Each method is designed to provide two probability distributions: the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on Dempster-Shafer theory, both distributions are mapped to a single distribution over three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input, together with initial road database entries, for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map, followed by road map extraction by means of vectorization and filtering of geometrically and topologically inconsistent objects. Depending on time constraints and the availability of a geo-database for buildings, the urban terrain reconstruction procedure outputs semantic models of buildings, trees, and ground. Buildings and ground are textured by means of the available images. This facilitates orientation in the model and the interactive verification of the road objects that were initially classified as unknown. The three main modules of the texturing algorithm are pose estimation (if the videos are not geo-referenced), occlusion analysis, and texture synthesis.
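A hedged sketch of the evidence mapping described above: each method's (correct, incorrect) belief is discounted by its model's applicability, the non-applicable share going to 'unknown' (the full frame of discernment), and several methods are combined with Dempster's rule. This is one plausible reading of the paper's fusion step, not its exact formulation.

```python
def to_mass(p_correct, p_applicable):
    """Map the two per-method distributions to masses over {correct, incorrect, unknown}."""
    return {"correct": p_correct * p_applicable,
            "incorrect": (1.0 - p_correct) * p_applicable,
            "unknown": 1.0 - p_applicable}

def dempster(m1, m2):
    """Dempster's rule: singletons conflict; 'unknown' is compatible with everything."""
    k = m1["correct"] * m2["incorrect"] + m1["incorrect"] * m2["correct"]
    f = 1.0 - k   # normalize by the total non-conflicting mass
    return {"correct": (m1["correct"] * m2["correct"]
                        + m1["correct"] * m2["unknown"]
                        + m1["unknown"] * m2["correct"]) / f,
            "incorrect": (m1["incorrect"] * m2["incorrect"]
                          + m1["incorrect"] * m2["unknown"]
                          + m1["unknown"] * m2["incorrect"]) / f,
            "unknown": m1["unknown"] * m2["unknown"] / f}

print(dempster(to_mass(0.8, 0.9), to_mass(0.6, 0.5)))
```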
Automatic classification and detection of clinically relevant images for diabetic retinopathy
NASA Astrophysics Data System (ADS)
Xu, Xinyu; Li, Baoxin
2008-03-01
We proposed a novel approach to the automatic classification of diabetic retinopathy (DR) images and the retrieval of clinically relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV) and normal; it then retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes, mapping every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are its top K nearest neighbors in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
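A minimal sketch of the retrieval stage only: bags already embedded in the multi-class bag feature space are filtered by the query's predicted label and ranked by Euclidean distance. The embedding itself and all names are placeholders, not the McMIL implementation.

```python
import numpy as np

def retrieve(query_vec, query_label, db_vecs, db_labels, k=5):
    """Return indices of the top-k same-label neighbors in the bag feature space."""
    same = np.where(db_labels == query_label)[0]        # label filter
    d = np.linalg.norm(db_vecs[same] - query_vec, axis=1)
    return same[np.argsort(d)[:k]]                      # nearest first

db_vecs = np.random.rand(100, 8)            # placeholder bag embeddings
db_labels = np.random.randint(0, 3, 100)    # 0=MA, 1=NV, 2=normal (illustrative)
print(retrieve(db_vecs[0], db_labels[0], db_vecs, db_labels))
```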
Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar
2016-10-01
The identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for generating a unique identification code for fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded in this approach and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is used to generate the unique identification code for the digital fundus image: the segmented blood vessel pattern near the optic disc is strategically combined with the patient ID. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images, and the results are encouraging. Experimental results indicate the uniqueness of the identification code and the lossless recovery of patient identity from the code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
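A hedged sketch of combining a segmented vessel pattern with a patient ID into a reproducible identification code; the hash-based construction below is an assumption for illustration, not the authors' exact scheme. Consistent with the abstract, nothing is embedded in the image itself.

```python
import hashlib
import numpy as np

def identification_code(vessel_mask, patient_id):
    """Derive a code from the vessel segmentation near the optic disc plus the patient ID."""
    payload = vessel_mask.tobytes() + patient_id.encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

mask = (np.random.rand(64, 64) > 0.9).astype(np.uint8)   # placeholder vessel mask
print(identification_code(mask, "patient-0042"))
```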
Generating patient specific pseudo-CT of the head from MR using atlas-based regression
NASA Astrophysics Data System (ADS)
Sjölund, J.; Forsberg, D.; Andersson, M.; Knutsson, H.
2015-01-01
Radiotherapy planning and attenuation correction of PET images require simulation of radiation transport. The necessary physical properties are typically derived from computed tomography (CT) images, but in some cases, including stereotactic neurosurgery and combined PET/MR imaging, only magnetic resonance (MR) images are available. With these applications in mind, we describe how a realistic, patient-specific pseudo-CT of the head can be derived from anatomical MR images. We refer to the method as atlas-based regression, because of its similarity to atlas-based segmentation. Given a target MR and an atlas database comprising MR and CT pairs, atlas-based regression works by registering each atlas MR to the target MR, applying the resulting displacement fields to the corresponding atlas CTs and, finally, fusing the deformed atlas CTs into a single pseudo-CT. We use a deformable registration algorithm known as the Morphon and augment it with a certainty mask that allows tailoring of the influence certain regions are allowed to have on the registration. Moreover, we propose a novel method of fusion, wherein the collection of deformed CTs is iteratively registered to their joint mean; we find that the resulting mean CT becomes more similar to the target CT. However, the voxelwise median provided even better results, at least as good as those of earlier work that required special MR imaging techniques. This makes atlas-based regression a good candidate for clinical use.
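A minimal sketch of the final fusion step, assuming the atlas CTs have already been warped into the target space by the deformable registration (the Morphon in the paper); the voxelwise median reported as the best-performing fusion is a one-liner.

```python
import numpy as np

def fuse_pseudo_ct(deformed_atlas_cts):
    """Fuse warped atlas CTs into one pseudo-CT by the voxelwise median."""
    stack = np.stack(deformed_atlas_cts, axis=0)   # (n_atlases, z, y, x)
    return np.median(stack, axis=0)

atlases = [np.random.randn(32, 64, 64) for _ in range(10)]  # placeholder warped CTs
pseudo_ct = fuse_pseudo_ct(atlases)
print(pseudo_ct.shape)   # (32, 64, 64)
```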
Cook, Tessa S; Zimmerman, Stefan L; Steingall, Scott R; Maidment, Andrew D A; Kim, Woojin; Boonn, William W
2011-01-01
There is growing interest in the ability to monitor, track, and report exposure to radiation from medical imaging. Historically, however, dose information has been stored on an image-based dose sheet, an arrangement that precludes widespread indexing. Although scanner manufacturers are beginning to include dose-related parameters in the Digital Imaging and Communications in Medicine (DICOM) headers of imaging studies, there remains a vast repository of retrospective computed tomographic (CT) data with image-based dose sheets. Consequently, it is difficult for imaging centers to monitor their dose estimates or participate in the American College of Radiology (ACR) Dose Index Registry. An automated extraction software pipeline known as Radiation Dose Intelligent Analytics for CT Examinations (RADIANCE) has been designed that quickly and accurately parses CT dose sheets to extract and archive dose-related parameters. Optical character recognition of information in the dose sheet leads to creation of a text file, which along with the DICOM study header is parsed to extract dose-related data. The data are then stored in a relational database that can be queried for dose monitoring and report creation. RADIANCE allows efficient dose analysis of CT examinations and more effective education of technologists, radiologists, and referring physicians regarding patient exposure to radiation at CT. RADIANCE also allows compliance with the ACR's dose reporting guidelines and greater awareness of patient radiation dose, ultimately resulting in improved patient care and treatment.
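A hedged sketch of the pipeline's core steps described above: OCR the image-based dose sheet, parse dose parameters with a regular expression, and store them in a relational table. Dose-sheet layouts vary by scanner; the field pattern and schema below are illustrative, not RADIANCE's actual parser.

```python
import re
import sqlite3
# import pytesseract                                   # OCR step, assuming Tesseract is installed
# text = pytesseract.image_to_string(dose_sheet_image) # dose sheet as an image
text = "Series 2  CTDIvol 12.34 mGy  DLP 456.7 mGy-cm" # placeholder OCR output

# Extract CTDIvol and DLP; the layout is an assumed example, not a standard
match = re.search(r"CTDIvol\s+([\d.]+)\s*mGy\s+DLP\s+([\d.]+)", text)

con = sqlite3.connect("dose.db")
con.execute("CREATE TABLE IF NOT EXISTS ct_dose (ctdi_vol REAL, dlp REAL)")
if match:
    con.execute("INSERT INTO ct_dose VALUES (?, ?)",
                (float(match.group(1)), float(match.group(2))))
con.commit()
```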
Exploring the feasibility of traditional image querying tasks for industrial radiographs
NASA Astrophysics Data System (ADS)
Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.
2015-08-01
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
NASA Astrophysics Data System (ADS)
Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.
2013-12-01
The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate the available data, in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience, we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate the work in progress within DECADE on developing the CARD (Carbon Degassing) database. The MAGA database will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and a complete literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores. For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers expert on the site. Data can be accessed over the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way, Geographical Information Systems (GIS) and virtual globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux data points on volcanic plume degassing from Etna (4 summit craters and bulk degassing) and Stromboli, with time-averaged CO2 fluxes of ~18,000 and 766 t/d, respectively; ii) data from ~30 sites of diffuse soil degassing from the Neapolitan volcanoes, Azores, Canary Islands, Etna, Stromboli, and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d); and iii) several data on fumarolic emissions (~7 sites) with CO2 fluxes up to 1340 t/d (i.e., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for the Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility to archive locations and qualitative information for gas emission sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
Mikaelyan, Aram; Köhler, Tim; Lampert, Niclas; Rohland, Jeffrey; Boga, Hamadi; Meuser, Katja; Brune, Andreas
2015-10-01
Recent developments in sequencing technology have given rise to a large number of studies that assess bacterial diversity and community structure in termite and cockroach guts based on large amplicon libraries of 16S rRNA genes. Although these studies have revealed important ecological and evolutionary patterns in the gut microbiota, classification of the short sequence reads is limited by the taxonomic depth and resolution of the reference databases used in the respective studies. Here, we present a curated reference database for accurate taxonomic analysis of the bacterial gut microbiota of dictyopteran insects. The Dictyopteran gut microbiota reference Database (DictDb) is based on the Silva database but was significantly expanded by the addition of clones from 11 mostly unexplored termite and cockroach groups, which increased the inventory of bacterial sequences from dictyopteran guts by 26%. The taxonomic depth and resolution of DictDb were significantly improved by a general revision of the taxonomic guide tree for all important lineages, including a detailed phylogenetic analysis of the Treponema and Alistipes complexes, the Fibrobacteres, and the TG3 phylum. The performance of this first documented version of DictDb (v. 3.0), using the revised taxonomic guide tree in the classification of short-read libraries obtained from termites and cockroaches, was highly superior to that of the current Silva and RDP databases. DictDb uses an informative nomenclature that is consistent with the literature, including for clades of uncultured bacteria, and provides an invaluable tool for anyone exploring the gut community structure of termites and cockroaches. Copyright © 2015 Elsevier GmbH. All rights reserved.
CHERNOLIT™. Chernobyl Bibliographic Search System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, F., Jr.; Kennedy, R. A.; Mahaffey, J. A.
1992-03-02
The Chernobyl Bibliographic Search System (Chernolit™) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit™ is a portable and easy-to-use product. The bibliographic data are provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit™ provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit™. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit™. Most of this information was obtained in response to requests for data sent to people and organizations throughout the world.
Chernobyl Bibliographic Search System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, F., Jr.; Kennedy, R. A.; Mahaffey, J. A.
1992-05-11
The Chernobyl Bibliographic Search System (Chernolit™) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit™ is a portable and easy-to-use product. The bibliographic data are provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit™ provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit™. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit™. Most of this information was obtained in response to requests for data sent to people and organizations throughout the world.
Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2016-06-01
Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. In order to provide such data quality information easily, we previously presented the architecture of a web service for the automation of quality control of spatial datasets, running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, such as positional accuracy or completeness assessment, the architecture permits the use of a reference dataset. However, this reference dataset is not ageless, since it suffers from the natural temporal degradation inherent to geospatial features. In order to mitigate this problem we propose the Time Degradation & Updating Module, which applies assessed data as a tool to keep the reference database updated. The main idea is to use datasets sent to the quality evaluation service as a source of 'candidate data elements' for updating the reference database. After the evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe that the outcomes can be applied in the pursuit of a fully automatic online quality evaluation platform.
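A minimal sketch of the candidate-promotion logic described above, under the assumption of a simple record interface; `reference_db`, `candidate_dataset`, `evaluate`, `upsert`, and the threshold value are illustrative names, not the module's actual API.

```python
QUALITY_THRESHOLD = 0.95  # hypothetical minimum quality level for promotion

def update_reference(reference_db, candidate_dataset, evaluate):
    """Promote assessed candidate elements into the reference database.

    Elements of a submitted dataset whose evaluated quality reaches the
    threshold become input data for refreshing aged reference features.
    """
    for element in candidate_dataset:
        score = evaluate(element, reference_db)  # e.g. positional accuracy vs. reference
        if score >= QUALITY_THRESHOLD:
            reference_db.upsert(element)         # high-quality data replaces degraded data
```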
The Joint Committee for Traceability in Laboratory Medicine (JCTLM) - its history and operation.
Jones, Graham R D; Jackson, Craig
2016-01-30
The Joint Committee for Traceability in Laboratory Medicine (JCTLM) was formed to bring together the sciences of metrology, laboratory medicine and laboratory quality management. The aim of this collaboration is to support worldwide comparability and equivalence of measurement results in clinical laboratories for the purpose of improving healthcare. The JCTLM has its origins in the activities of international metrology treaty organizations, professional societies and federations devoted to improving measurement quality in physical, chemical and medical sciences. The three founding organizations, the International Committee for Weights and Measures (CIPM), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and the International Laboratory Accreditation Cooperation (ILAC) are the leaders of this activity. The main service of the JCTLM is a web-based database with a list of reference materials, reference methods and reference measurement services meeting appropriate international standards. This database allows manufacturers to select references for assay traceability and provides support for suppliers of these services. As of mid 2015 the database lists 295 reference materials for 162 analytes, 170 reference measurement procedures for 79 analytes and 130 reference measurement services for 39 analytes. There remains a need for the development and implementation of metrological traceability in many areas of laboratory medicine and the JCTLM will continue to promote these activities into the future. Copyright © 2015 Elsevier B.V. All rights reserved.
Semi-automatic feedback using concurrence between mixture vectors for general databases
NASA Astrophysics Data System (ADS)
Larabi, Mohamed-Chaker; Richard, Noel; Colot, Olivier; Fernandez-Maloigne, Christine
2001-12-01
This paper describes how a query system can exploit basic knowledge by employing semi-automatic relevance feedback to refine queries and reduce runtimes. For general databases, it is often useless to invoke complex attributes, because we do not have sufficient information about the images in the database. Moreover, these images can be topologically very different from one another, and an attribute that is powerful for one database category may perform very poorly for the other categories. The idea is to use very simple features, such as color histograms, correlograms, and Color Coherence Vectors (CCV), to fill out the signature vector. Then, a number of mixture vectors are prepared, depending on the number of clearly distinct categories in the database; a mixture vector is a vector containing the weight of each attribute that will be used to compute a similarity distance. We query the database successively with each of the mixture vectors defined previously. We then retain the first N images for each vector in order to build a mapping using the following information: is image I present in the results of several mixture vectors, and what is its rank in the results? This information allows us to switch the system to either unsupervised relevance feedback or user feedback (supervised feedback).
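As a rough sketch of the mixture-vector idea under stated assumptions: each image signature is a list of simple per-attribute feature vectors, a mixture vector weights the per-attribute distances, and occurrences across mixtures feed the feedback step. All names, the distance choice, and the top-N policy are illustrative.

```python
import numpy as np

def weighted_distance(query_sig, image_sig, mixture):
    """Similarity distance where `mixture` weights each simple attribute
    (e.g. color histogram, correlogram, CCV) in the signature vector."""
    return sum(w * np.linalg.norm(q - s)
               for w, q, s in zip(mixture, query_sig, image_sig))

def rank_with_mixtures(query_sig, database, mixtures, top_n=20):
    """Query once per mixture vector and record each image's ranks, so that
    images recurring across mixtures can drive the feedback step."""
    occurrences = {}
    for mixture in mixtures:
        ranked = sorted(database,
                        key=lambda item: weighted_distance(query_sig,
                                                           item.signature,
                                                           mixture))
        for rank, item in enumerate(ranked[:top_n]):
            occurrences.setdefault(item.id, []).append(rank)
    return occurrences  # image id -> its ranks across the mixture vectors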
NASA Technical Reports Server (NTRS)
McMillin, Naomi; Allen, Jerry; Erickson, Gary; Campbell, Jim; Mann, Mike; Kubiatko, Paul; Yingling, David; Mason, Charlie
1999-01-01
The objective was to experimentally evaluate the longitudinal and lateral-directional stability and control characteristics of the Reference H configuration at supersonic and transonic speeds. A series of conventional and alternate control devices were also evaluated at supersonic and transonic speeds. A database on the conventional and alternate control devices was to be created for use in the HSR program.
Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases
Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert
2010-01-01
Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. Until now, it has still required highly trained scientists to browse a prohibitively large RNAi-HCS image database and produce only a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of HCS technology in biomedical research. Our new interfaces empower biologists with computational power not only to explore large-scale RNAi-HCS image databases effectively and efficiently, but also to apply their knowledge and experience to the interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doak, J. E.; Prasad, Lakshman
2002-01-01
This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project, including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.
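The paper's actual MySQL schema is not reproduced in the abstract. The sketch below shows one conventional way to store objects as tree structures, an adjacency-list schema, using Python's built-in sqlite3 as a stand-in for MySQL; table and column names are assumptions for illustration only.

```python
import sqlite3  # stand-in for the project's MySQL database

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE object (id INTEGER PRIMARY KEY, image_id INTEGER, label TEXT);
-- Adjacency list: each node of an object's tree points to its parent,
-- so a hierarchical shape representation can be stored and walked recursively.
CREATE TABLE node (
    id INTEGER PRIMARY KEY,
    object_id INTEGER REFERENCES object(id),
    parent_id INTEGER REFERENCES node(id),  -- NULL for the root node
    feature TEXT                            -- syntactic shape descriptor
);
""")
```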
Bio-psycho-social factors affecting sexual self-concept: A systematic review.
Potki, Robabeh; Ziaei, Tayebe; Faramarzi, Mahbobeh; Moosazadeh, Mahmood; Shahhosseini, Zohreh
2017-09-01
Nowadays, it is believed that the mental and emotional aspects of sexual well-being are important aspects of sexual health. Sexual self-concept is a major component of sexual health and the core of sexuality. It is defined as the cognitive perspective concerning the sexual aspects of 'self' and refers to the individual's self-perception as a sexual creature. The aim of this study was to assess the different factors affecting sexual self-concept. English electronic databases including PubMed, Scopus, Web of Science and Google Scholar, as well as two Iranian databases, the Scientific Information Database and Iranmedex, were searched for English- and Persian-language articles published between 1996 and 2016. Of 281 retrieved articles, 37 were finally included in this review. Factors affecting sexual self-concept were categorized into biological, psychological and social factors. In the category of biological factors, age, gender, marital status, race, disability and sexually transmitted infections are described. In the psychological category, the impact of body image, sexual abuse in childhood and mental health history is presented. Lastly, in the social category, the roles of parents, peers and the media are discussed. As the development of sexual self-concept is influenced by multiple events in individuals' lives, an integrated implementation of health policies is recommended to promote sexual self-concept.
An end to end secure CBIR over encrypted medical database.
Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwenole
2016-08-01
In this paper, we propose a new secure content-based image retrieval (SCBIR) system adapted to the cloud framework. This solution allows a physician to retrieve images of similar content within an outsourced and encrypted image database, without decrypting them. In contrast to existing CBIR approaches in the encrypted domain, the originality of the proposed scheme lies in the fact that the features extracted from the encrypted images are themselves encrypted. This is achieved by means of homomorphic encryption and two non-colluding servers, both of which are nevertheless considered honest-but-curious. In that way an end-to-end secure CBIR process is ensured. Experimental results carried out on a diabetic retinopathy database encrypted with the Paillier cryptosystem indicate that our SCBIR achieves retrieval performance as good as if the images were processed in their non-encrypted form.
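The scheme rests on the additive homomorphism of the Paillier cryptosystem. The following minimal sketch, using the python-paillier (`phe`) package, shows only that primitive at work; it is not the paper's two-server SCBIR protocol, and the feature values and weights are made up.

```python
from phe import paillier  # python-paillier: pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# The client encrypts its feature vector before outsourcing it.
features = [0.12, 0.57, 0.31]
enc_features = [public_key.encrypt(x) for x in features]

# A server can add ciphertexts and scale them by plaintext constants without
# ever decrypting -- the building block that lets encrypted features be
# processed server-side.
weights = [0.2, 0.5, 0.3]
enc_weighted_sum = sum(w * c for w, c in zip(weights, enc_features))

print(private_key.decrypt(enc_weighted_sum))  # only the key holder can do this
```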
Palmprint Recognition Across Different Devices.
Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming
2012-01-01
In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smartphones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation-coding-based method. Experimental results show that orientation-coding-based methods achieved promising recognition performance for PRADD.
Palmprint Recognition across Different Devices
Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming
2012-01-01
In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smartphones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation-coding-based method. Experimental results show that orientation-coding-based methods achieved promising recognition performance for PRADD. PMID:22969380
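Once a palm width has been measured, scale normalization reduces to rescaling the image so that width matches a canonical value, compensating for the varying camera distance of non-contact capture. A minimal sketch with OpenCV follows; the canonical width and function name are assumptions, and the palm-width measurement itself (the paper's contribution) is taken as given.

```python
import cv2

REFERENCE_PALM_WIDTH = 220  # pixels; hypothetical canonical palm width

def normalize_scale(image, measured_palm_width):
    """Rescale a palmprint so its measured palm width matches the canonical value."""
    scale = REFERENCE_PALM_WIDTH / float(measured_palm_width)
    return cv2.resize(image, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_LINEAR)
```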
NASA Technical Reports Server (NTRS)
vonOfenheim, William H. C.; Heimerl, N. Lynn; Binkley, Robert L.; Curry, Marty A.; Slater, Richard T.; Nolan, Gerald J.; Griswold, T. Britt; Kovach, Robert D.; Corbin, Barney H.; Hewitt, Raymond W.
1998-01-01
This paper discusses the technical aspects of and the project background for the NASA Image eXchange (NIX). NIX, which provides a single entry point to search selected image databases at the NASA Centers, is a meta-search engine (i.e., a search engine that communicates with other search engines). It uses these distributed digital image databases to access photographs, animations, and their associated descriptive information (meta-data). NIX is available for use at the following URL: http://nix.nasa.gov/. NIX, which was sponsored by NASA's Scientific and Technical Information (STI) Program, currently serves images from seven NASA Centers. Plans are under way to link image databases from three additional NASA Centers. Images and their associated meta-data, which are accessible by NIX, reside at the originating Centers, and NIX utilizes a virtual central site that communicates with each of these sites. Incorporated into the virtual central site are several protocols to support searches from a diverse collection of database engines. The searches are performed in parallel to ensure optimization of response times. To augment the search capability, browse functionality with pre-defined categories has been built into NIX, thereby ensuring dissemination of 'best-of-breed' imagery. As a final recourse, NIX offers access to a help desk via an on-line form to help locate images and information either within the scope of NIX or from available external sources.
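A meta-search engine that queries distributed sites in parallel can be sketched in a few lines; the version below uses a thread pool and `requests`. The center endpoints, query parameter, and JSON result format are hypothetical, since NIX's actual protocol adapters are not described in enough detail here.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

# Hypothetical per-center search endpoints (illustrative only).
CENTER_ENDPOINTS = ["http://center-a.example/search",
                    "http://center-b.example/search"]

def query_center(url, terms):
    resp = requests.get(url, params={"q": terms}, timeout=10)
    resp.raise_for_status()
    return resp.json()

def meta_search(terms):
    """Fan the query out to every center in parallel and merge the hits,
    mirroring the parallel-search design that keeps response times low."""
    with ThreadPoolExecutor(max_workers=len(CENTER_ENDPOINTS)) as pool:
        results = pool.map(lambda url: query_center(url, terms), CENTER_ENDPOINTS)
    return [hit for center_hits in results for hit in center_hits]
```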
USDA Branded Food Products Database, Release 2
USDA-ARS's Scientific Manuscript database
The USDA Branded Food Products Database is the ongoing result of a Public-Private Partnership (PPP), whose goal is to enhance public health and the sharing of open data by complementing the USDA National Nutrient Database for Standard Reference (SR) with nutrient composition of branded foods and pri...
RefEx, a reference gene expression dataset as a web tool for the functional analysis of genes.
Ono, Hiromasa; Ogasawara, Osamu; Okubo, Kosaku; Bono, Hidemasa
2017-08-29
Gene expression data are accumulating exponentially; thus, the functional annotation of such sequence data from metadata is urgently required. However, life scientists have difficulty utilizing the available data due to its sheer magnitude and complicated access. We have developed a web tool for browsing reference gene expression patterns of mammalian tissues and cell lines measured using different methods, which should facilitate the reuse of the precious data archived in several public databases. The web tool is called Reference Expression dataset (RefEx), and RefEx allows users to search by gene name, various types of IDs, chromosomal regions in genetic maps, gene families based on InterPro, gene expression patterns, or biological categories based on Gene Ontology. RefEx also provides information about genes with tissue-specific expression, and the relative gene expression values are shown as choropleth maps on 3D human body images from BodyParts3D. Combined with the newly incorporated Functional Annotation of Mammals (FANTOM) dataset, RefEx provides insight regarding the functional interpretation of unfamiliar genes. RefEx is publicly available at http://refex.dbcls.jp/.
RefEx, a reference gene expression dataset as a web tool for the functional analysis of genes
Ono, Hiromasa; Ogasawara, Osamu; Okubo, Kosaku; Bono, Hidemasa
2017-01-01
Gene expression data are accumulating exponentially; thus, the functional annotation of such sequence data from metadata is urgently required. However, life scientists have difficulty utilizing the available data due to its sheer magnitude and complicated access. We have developed a web tool for browsing reference gene expression patterns of mammalian tissues and cell lines measured using different methods, which should facilitate the reuse of the precious data archived in several public databases. The web tool is called Reference Expression dataset (RefEx), and RefEx allows users to search by gene name, various types of IDs, chromosomal regions in genetic maps, gene families based on InterPro, gene expression patterns, or biological categories based on Gene Ontology. RefEx also provides information about genes with tissue-specific expression, and the relative gene expression values are shown as choropleth maps on 3D human body images from BodyParts3D. Combined with the newly incorporated Functional Annotation of Mammals (FANTOM) dataset, RefEx provides insight regarding the functional interpretation of unfamiliar genes. RefEx is publicly available at http://refex.dbcls.jp/. PMID:28850115
NASA Astrophysics Data System (ADS)
Schmitz, Marion; Mazzarella, J. M.; Madore, B. F.; Ogle, P. M.; Ebert, R.; Baker, K.; Chan, H.; Chen, X.; Fadda, D.; Frayer, C.; Jacobson, J. D.; LaGue, C.; Lo, T. M.; Pevunova, O.; Terek, S.; Steer, I.
2014-01-01
At the urging of the NASA/IPAC Extragalactic Database (NED) Users Committee, the NED Team has prepared and published on its website a new document titled "Best Practices for Data Publication to Facilitate Integration into NED: A Reference Guide for Authors" (http://ned.ipac.caltech.edu/docs/BPDP/NED_BPDP.pdf). We hope that journal publishers will incorporate links to this living document in their Instructions to Authors to provide a practical reference for authors, referees, and science editors so as to help avoid various pitfalls that often impede the interpretation of data and metadata, and also delay their integration into NED, SIMBAD, ADS and other systems. In particular, we discuss the importance of using proper naming conventions, providing the epoch and system of coordinates, including units and uncertainties, and giving sufficient metadata for the unambiguous interpretation of tabular, imaging, and spectral data. The biggest impediments to the assimilation of new data from the literature into NED are ambiguous object names and non-unique, coordinate-based identifiers. A Checklist of Recommendations will be presented which includes links to sections of the Best Practices document that provide further examples, explanation, and rationale.
Computer-assisted abdominal surgery: new technologies.
Kenngott, H G; Wagner, M; Nickel, F; Wekerle, A L; Preukschas, A; Apitz, M; Schulte, T; Rempel, R; Mietkowski, P; Wagner, F; Termer, A; Müller-Stich, Beat P
2015-04-01
Computer-assisted surgery is a wide field of technologies with the potential to enable the surgeon to improve efficiency and efficacy of diagnosis, treatment, and clinical management. This review provides an overview of the most important new technologies and their applications. A MEDLINE database search was performed revealing a total of 1702 references. All references were considered for information on six main topics, namely image guidance and navigation, robot-assisted surgery, human-machine interface, surgical processes and clinical pathways, computer-assisted surgical training, and clinical decision support. Further references were obtained through cross-referencing the bibliography cited in each work. Based on their respective field of expertise, the authors chose 64 publications relevant for the purpose of this review. Computer-assisted systems are increasingly used not only in experimental studies but also in clinical studies. Although computer-assisted abdominal surgery is still in its infancy, the number of studies is constantly increasing, and clinical studies start showing the benefits of computers used not only as tools of documentation and accounting but also for directly assisting surgeons during diagnosis and treatment of patients. Further developments in the field of clinical decision support even have the potential of causing a paradigm shift in how patients are diagnosed and treated.
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content-based image retrieval method for diagnosis aid in medical fields. Rather than extracting hand-picked significant features, we characterize images by the distribution of their wavelet transform coefficients, building signatures from these distributions. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013
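As a minimal sketch of the general approach, assuming PyWavelets and numpy: histogram the detail coefficients of each decomposition level to form a signature, then compare signatures with a weighted L1 distance. The paper instead fits a parametric model of the coefficient distribution; the histogram variant, wavelet choice, and bin range below are simplifying assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_signature(image, wavelet="db2", level=3, bins=32):
    """Concatenate per-level histograms of wavelet detail coefficients."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    hists = []
    for detail in coeffs[1:]:  # skip the coarse approximation band
        values = np.concatenate([band.ravel() for band in detail])
        # bin range is illustrative; it depends on the image value range
        hist, _ = np.histogram(values, bins=bins, range=(-255, 255), density=True)
        hists.append(hist)
    return np.concatenate(hists)

def signature_distance(sig_a, sig_b, weights=None):
    """Weighted L1 distance between two signatures (weights illustrative)."""
    diff = np.abs(sig_a - sig_b)
    return float(np.sum(diff if weights is None else weights * diff))
```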
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Jiahui; Engelmann, Roger; Li Qiang
2007-12-15
Accurate segmentation of pulmonary nodules in computed tomography (CT) is an important and difficult task for computer-aided diagnosis of lung cancer. Therefore, the authors developed a novel automated method for accurate segmentation of nodules in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. To simplify nodule segmentation, the 3D VOI was transformed into a two-dimensional (2D) image by use of a key 'spiral-scanning' technique, in which a number of radial lines originating from the center of the VOI spirally scanned the VOI from the 'north pole' to the 'south pole'. The voxels scanned by the radial lines provided a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified the segmentation method and enabled reliable segmentation results to be obtained. A dynamic programming technique was employed to delineate the 'optimal' outline of a nodule in the 2D image, which corresponded to the surface of the nodule in the 3D image. The optimal outline was then transformed back into 3D image space to provide the surface of the nodule. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric for evaluating the segmentation method. The database included two Lung Imaging Database Consortium (LIDC) data sets that contained 23 and 86 CT scans, respectively, with 23 and 73 nodules that were 3 mm or larger in diameter. For the two data sets, six and four radiologists manually delineated the outlines of the nodules as reference standards in a performance evaluation for nodule segmentation. The segmentation method was trained on the first and tested on the second LIDC data set. The mean overlap values were 66% and 64% for the nodules in the first and second LIDC data sets, respectively, which represented a higher performance level than those of two existing segmentation methods that were also evaluated by use of the LIDC data sets. The segmentation method provided relatively reliable results for pulmonary nodule segmentation and would be useful for lung cancer quantification, detection, and diagnosis.
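A rough numpy sketch of the spiral-scanning transform as described: radial-line directions sweep in inclination from pole to pole while the azimuth winds repeatedly, and the voxels sampled along each line fill one column of the output 2D image. The line, turn, and sample counts are illustrative, not the paper's values, and nearest-neighbour sampling stands in for whatever interpolation the authors used.

```python
import numpy as np

def spiral_scan(voi, n_lines=360, n_turns=10, n_samples=64):
    """Transform a cubic 3D VOI into a 2D image via spiral radial sampling."""
    center = (np.array(voi.shape) - 1) / 2.0
    max_radius = min(voi.shape) / 2.0 - 1            # stay inside the VOI
    theta = np.linspace(0, np.pi, n_lines)           # pole-to-pole inclination
    phi = np.linspace(0, 2 * np.pi * n_turns, n_lines)  # azimuth winds n_turns times
    radii = np.linspace(0, max_radius, n_samples)
    out = np.zeros((n_samples, n_lines))
    for j, (t, p) in enumerate(zip(theta, phi)):
        direction = np.array([np.sin(t) * np.cos(p),
                              np.sin(t) * np.sin(p),
                              np.cos(t)])
        for i, r in enumerate(radii):
            idx = np.rint(center + r * direction).astype(int)
            out[i, j] = voi[tuple(idx)]              # nearest-neighbour sampling
    return out  # a nodule surface in 3D becomes a curve in this 2D image
```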
The International Outer Planets Watch atmospheres node database of giant-planet images
NASA Astrophysics Data System (ADS)
Hueso, R.; Legarreta, J.; Sánchez-Lavega, A.; Rojas, J. F.; Gómez-Forrellad, J. M.
2011-10-01
The Atmospheres Node of the International Outer Planets Watch (IOPW) aims to encourage the observation and study of the atmospheres of the giant planets. One of its main activities is to provide interaction between the professional and amateur astronomical communities by maintaining an online, fully searchable database of images of the giant planets obtained by amateur astronomers and available to both professionals and amateurs [1]. The IOPW database contains about 13,000 image observations of Jupiter and Saturn obtained in the visible range, with a few contributions of Uranus and Neptune. We describe the organization and structure of the database as posted on the Internet and, in particular, the PVOL software (Planetary Virtual Observatory & Laboratory) designed to manage the site, based on concepts from Virtual Observatory projects.
Intelligent communication assistant for databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobson, G.; Shaked, V.; Rowley, S.
1983-01-01
An intelligent communication assistant for databases, called FRED (front end for databases), is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second-generation natural language front end for databases and is intended to solve two critical interface problems existing between end-users and databases: connectivity and communication problems. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.
Oubel, Estanislao; Bonnard, Eric; Sueoka-Aragane, Naoko; Kobayashi, Naomi; Charbonnier, Colette; Yamamichi, Junta; Mizobe, Hideaki; Kimura, Shinya
2015-02-01
Lesion volume is considered a promising alternative to the Response Evaluation Criteria in Solid Tumors (RECIST) for making tumor measurements more accurate and consistent, which would enable earlier detection of temporal changes. In this article, we report the results of a pilot study aiming at evaluating the effects of consensual lesion selection on volume-based response (VBR) assessments. Eleven patients with lung computed tomography scans acquired at three time points were selected from the Reference Image Database to Evaluate Response to therapy in lung cancer (RIDER) and proprietary databases. Images were analyzed according to RECIST 1.1 and VBR criteria by three readers working in different geographic locations. Cloud solutions were used to connect readers and carry out a consensus process on the selection of lesions used for computing response. Because there are no currently accepted thresholds for computing VBR, we applied a set of thresholds based on measurement variability (-35% and +55%). The benefit of this consensus was measured in terms of multiobserver agreement by using Fleiss' kappa (κfleiss) and the corresponding standard errors (SE). VBR after consensual selection of target lesions yielded κfleiss = 0.85 (SE = 0.091), which increases up to 0.95 (SE = 0.092) if an extra consensus on new lesions is added. As a reference, the agreement when applying RECIST without consensus was κfleiss = 0.72 (SE = 0.088). These differences were found to be statistically significant according to a z-test. An agreement on the selection of lesions allows reducing inter-reader variability when computing VBR. Cloud solutions proved to be an interesting and feasible strategy for standardizing response evaluations, reducing variability, and increasing the consistency of results in multicenter clinical trials. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
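The agreement statistic used above is Fleiss' kappa. As a worked illustration, here is a minimal numpy implementation of the standard formula for a subjects-by-categories matrix of rating counts; the example shape (11 patients, 3 response categories, 3 readers) mirrors the study design, but the matrix values would be hypothetical.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an (n_subjects x n_categories) matrix of counts.

    Each row must sum to the same number of raters (here, three readers).
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]                   # raters per subject
    p_j = ratings.sum(axis=0) / ratings.sum()    # overall category proportions
    P_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)    # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# e.g. 11 patients each classified PR/SD/PD by 3 readers:
# fleiss_kappa([[3, 0, 0], [2, 1, 0], ...])
```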
The development of a population of 4D pediatric XCAT phantoms for imaging research and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segars, W. P., E-mail: paul.segars@duke.edu; Norris, Hannah; Sturgeon, Gregory M.
Purpose: We previously developed a set of highly detailed 4D reference pediatric extended cardiac-torso (XCAT) phantoms at ages of newborn, 1, 5, 10, and 15 yr with organ and tissue masses matched to ICRP Publication 89 values. In this work, we extended this reference set to a series of 64 pediatric phantoms of varying age and height and body mass percentiles representative of the public at large. The models will provide a library of pediatric phantoms for optimizing pediatric imaging protocols. Methods: High resolution positron emission tomography-computed tomography data obtained from the Duke University database were reviewed by a practicing experienced radiologist for anatomic regularity. The CT portion of the data was then segmented with manual and semiautomatic methods to form a target model defined using nonuniform rational B-spline surfaces. A multichannel large deformation diffeomorphic metric mapping algorithm was used to calculate the transform from the best age matching pediatric XCAT reference phantom to the patient target. The transform was used to complete the target, filling in the nonsegmented structures and defining models for the cardiac and respiratory motions. The complete phantoms, consisting of thousands of structures, were then manually inspected for anatomical accuracy. The mass for each major tissue was calculated and compared to linearly interpolated ICRP values for different ages. Results: Sixty four new pediatric phantoms were created in this manner. Each model contains the same level of detail as the original XCAT reference phantoms and also includes parameterized models for the cardiac and respiratory motions. For the phantoms that were 10 yr old and younger, we included both sets of reproductive organs. This gave them the capability to simulate both male and female anatomy. With this, the population can be expanded to 92. Wide anatomical variation was clearly seen amongst the phantom models, both in organ shape and size, even for models of the same age and sex. The phantoms can be combined with existing simulation packages to generate realistic pediatric imaging data from different modalities. Conclusions: This work provides a large cohort of highly detailed pediatric phantoms with 4D capabilities of varying age, height, and body mass. The population of phantoms will provide a vital tool with which to optimize 3D and 4D pediatric imaging devices and techniques in terms of image quality and radiation-absorbed dose.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos
1992-01-01
The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, the concept of increasing database performance by facilitating the use of time-saving techniques at the user level became a theme for the project. This final report describes the design and implementation of UIS and its language interfaces. It also includes the User's Guide and the Reference Manual.
Parkhill, Anne; Hill, Kelvin
2009-03-01
The Australian National Stroke Foundation appointed a search specialist to find the best available evidence for the second edition of its Clinical Guidelines for Acute Stroke Management. To identify the relative effectiveness of differing evidence sources for the guideline update. We searched and reviewed references from five valid evidence sources for clinical and economic questions: (i) electronic databases; (ii) reference lists of relevant systematic reviews, guidelines, and/or primary studies; (iii) table of contents of a number of key journals for the last 6 months; (iv) internet/grey literature; and (v) experts. Reference sources were recorded, quantified, and analysed. In the clinical portion of the guidelines document, there was a greater use of previous knowledge and sources other than electronic databases for evidence, while there was a greater use of electronic databases for the economic section. The results confirmed that searchers need to be aware of the context and range of sources for evidence searches. For best available evidence, searchers cannot rely solely on electronic databases and need to encompass many different media and sources.
Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru
The use of the computed-tomography (CT) coordinate system as the frame of reference in single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of this high-resolution frame of reference on xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image-quality phantom were scanned using the SPECT/CT system. xSPECT reconstructions were performed with reference CT images at different sizes of the display field-of-view (DFOV) and pixel. The pixel sizes of the reconstructed xSPECT images were close to 2.4 mm, the size at which the projection data were originally acquired, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image-quality phantom were independent of the DFOV size of the reference CT images. The results of this study revealed that the image quality of reconstructed xSPECT images is not influenced by the resolution of the frame of reference used in SPECT reconstruction.
Image-based diagnostic aid for interstitial lung disease with secondary data integration
NASA Astrophysics Data System (ADS)
Depeursinge, Adrien; Müller, Henning; Hidki, Asmâa; Poletti, Pierre-Alexandre; Platon, Alexandra; Geissbuhler, Antoine
2007-03-01
Interstitial lung diseases (ILDs) are a relatively heterogeneous group of around 150 illnesses with often very unspecific symptoms. The most complete imaging method for the characterisation of ILDs is high-resolution computed tomography (HRCT) of the chest, but a correct interpretation of these images is difficult even for specialists, as many of the diseases are rare and thus little experience exists. Moreover, interpreting HRCT images requires knowledge of the context defined by the clinical data of the studied case. A computerised diagnostic aid tool based on HRCT images with associated medical data, retrieving similar cases of ILDs from a dedicated database, can provide quick and valuable information, for example for emergency radiologists. Experience from a pilot project highlighted the need for a detailed database containing high-quality annotations in addition to clinical data. The state of the art is studied to identify requirements for image-based diagnostic aid for interstitial lung disease with secondary data integration. The data acquisition steps are detailed. The selection of the most relevant clinical parameters is done in collaboration with lung specialists, based on the current literature along with the knowledge bases of computer-based diagnostic decision support systems. In order to perform high-quality annotations of the interstitial lung tissue in the HRCT images, annotation software with its own file format was implemented for DICOM images. A multimedia database was implemented to store ILD cases with clinical data and annotated image series. Cases from the University & University Hospitals of Geneva (HUG) are retrospectively and prospectively collected to populate the database. Currently, 59 cases with certified diagnoses and their clinical parameters are stored in the database, as well as 254 image series, of which 26 have their regions of interest annotated. The available data were used to test primary visual features for the classification of lung tissue patterns. These features show good discriminative properties for the separation of five classes of visual observations.
Globe Teachers Guide and Photographic Data on the Web
NASA Technical Reports Server (NTRS)
Kowal, Dan
2004-01-01
The task of managing the GLOBE Online Teacher's Guide during this time period focused on transforming the technology behind the document's delivery system. The web application was transformed from a flat-file retrieval system to a dynamic database-access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also included updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application. Instructions for modifying the JSP templates and managing database content were included in this document. It was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. During this same time period, 333 study site photo images were added to the GLOBE database and posted on the web for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and the Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.
Bloch-Mouillet, E
1999-01-01
This paper aims to provide technical and practical advice about finding references using Current Contents on disk (Macintosh or PC) or via the Internet (FTP). Seven editions are published each week. They are all organized in the same way and have the same search engine. The Life Sciences edition, extensively used in medical research, is presented here in detail, as an example. This methodological note explains, in French, how to use this reference database. It is designed to be a practical guide for browsing and searching the database, and particularly for creating search profiles adapted to the needs of researchers.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task-space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task-space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of task-space database information and image sensor data, a verifiable task-space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
Development of educational image databases and e-books for medical physics training.
Tabakov, S; Roberts, V C; Jonsson, B-A; Ljungberg, M; Lewis, C A; Wirestam, R; Strand, S-E; Lamm, I-L; Milano, F; Simmons, A; Deane, C; Goss, D; Aitken, V; Noel, A; Giraud, J-Y; Sherriff, S; Smith, P; Clarke, G; Almqvist, M; Jansson, T
2005-09-01
Medical physics education and training require the use of extensive imaging material and specific explanations. These requirements provide an excellent background for the application of e-Learning. The EU project consortia EMERALD and EMIT developed five volumes of such materials, now used in 65 countries. EMERALD developed e-Learning materials in three areas of medical physics (X-ray diagnostic radiology, nuclear medicine and radiotherapy). EMIT developed e-Learning materials in two further areas: ultrasound and magnetic resonance imaging. This paper describes the development of these e-Learning materials (consisting of e-books and educational image databases). The e-books include tasks to help the study of various equipment and methods. The text of these PDF e-books is hyperlinked with the respective images. The e-books are used through the reader's own Internet browser. Each Image Database (IDB) includes a browser, which displays hundreds of images of equipment, block diagrams and graphs, image quality examples, artefacts, etc. Both the e-books and the IDBs are engraved on five separate CD-ROMs. A demo of these materials is available at www.emerald2.net.
Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.
ERIC Educational Resources Information Center
Conlon, Sumali Pin-Ngern; And Others
1993-01-01
Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)
Evaluation of Database Coverage: A Comparison of Two Methodologies.
ERIC Educational Resources Information Center
Tenopir, Carol
1982-01-01
Describes experiment which compared two techniques used for evaluating and comparing database coverage of a subject area, e.g., "bibliography" and "subject profile." Differences in time, cost, and results achieved are compared by applying techniques to field of volcanology using two databases, Geological Reference File and GeoArchive. Twenty…
Comprehensive Thematic T-matrix Reference Database: a 2013-2014 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda T.; Khlebtsov, Nikolai G.; Wriedt, Thomas; Videen, Gorden
2014-01-01
This paper is the sixth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists several earlier publications not incorporated in the original database and previous updates.
Development and applications of the EntomopathogenID MLSA database for use in agricultural systems
USDA-ARS's Scientific Manuscript database
The current study reports the development and application of a publicly accessible, curated database of Hypocrealean entomopathogenic fungi sequence data. The goal was to provide a platform for users to easily access sequence data from reference strains. The database can be used to accurately identi...
ERIC Educational Resources Information Center
Cotton, P. L.
1987-01-01
Defines two types of online databases: source, referring to those intended to be complete in themselves, whether full-text or abstracts; and bibliographic, meaning those that are not complete. Predictions are made about the future growth rate of these two types of databases, as well as full-text versus abstract databases. (EM)
[A systematic evaluation of application of the web-based cancer database].
Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui
2013-10-01
In order to support the theory and practice of web-based cancer database development in China, we performed a systematic evaluation to assess the state of development of web-based cancer databases at home and abroad. We performed computer-based retrieval of the Ovid-MEDLINE, Springerlink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and hand-searched the references of these papers. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis. The online database searches yielded 1244 papers, and checking the reference lists identified another 19 articles. Thirty-one articles met the inclusion and exclusion criteria, and we extracted and assessed the evidence from them. Analysis of this evidence showed that the U.S.A. ranked first, accounting for 26% of the databases. Thirty-nine percent of the web-based cancer databases are comprehensive cancer databases. As for single-cancer databases, breast cancer and prostate cancer are at the top, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for technical applications, MySQL and PHP were the most widely applied, at nearly 23% each.
Shrub Abundance Mapping in Arctic Tundra with Misr
NASA Astrophysics Data System (ADS)
Duchesne, R.; Chopping, M. J.; Wang, Z.; Schaaf, C.; Tape, K. D.
2013-12-01
Over the last 60 years an increase in shrub abundance has been observed in the Arctic tundra in connection with a rapid surface warming trend. Rapid shrub expansion may have consequences in terms of ecosystem structure and function, albedo, and feedbacks to climate; however, its rate is not yet known. The goal of this research effort is thus to map large-scale changes in Arctic tundra vegetation by exploiting the structural signal in moderate-resolution satellite remote sensing images from NASA's Multiangle Imaging SpectroRadiometer (MISR), mapped onto a 250 m Albers Conic Equal Area grid. We present here large-area shrub mapping supported by reference data collated using extensive field inventory data and high-resolution panchromatic imagery. MISR Level 1B2 Terrain radiance scenes from the Terra satellite, from 15 June to 31 July of 2000-2010, were converted to surface bidirectional reflectance factors (BRFs) using MISR Toolkit routines and the MISR 1 km LAND product BRFs. The red-band data in all available cameras were used to invert the RossThick-LiSparse-Reciprocal BRDF model to retrieve kernel weights, model-fitting RMSE, and weights of determination. The reference database was constructed using an aerial survey, three field campaigns (field inventories of shrub count, cover, mean radius and height), and high-resolution imagery. Tall shrub number, mean crown radius, cover, and mean height estimates were obtained from QuickBird and GeoEye panchromatic image chips using the CANAPI algorithm, and calibrated using field-based estimates, thus extending the database to over eight hundred locations. Tall-shrub fractional cover maps for the North Slope of Alaska were constructed using the bootstrap forest machine learning algorithm, which exploits the surface information provided by MISR. The reference database was divided into two datasets for training and validation. The derived model used a set of 19 independent variables (the three kernel weights, their ratios and interaction terms; white-sky and black-sky albedos; and blue, green, red, and NIR nadir-camera BRFs) to grow a forest of decision trees. The final estimate is the average of the predicted values from each tree. Observations not used in constructing the trees were used in validation. The model was applied to a large volume of MISR data, and the resulting fractional cover estimates were combined into annual maps using a compositing algorithm that flags results affected by cloud, cloud shadow, surface water, extreme outliers, topographic shading, and burned areas. The maps show that shrub cover is lower on the North Slope in comparison to the southern part, as expected. However, a preliminary assessment of the fractional cover change over the last decade, achieved by averaging fractional cover values for 2000-2002 and 2008-2010 and then calculating the change between the two periods, revealed that there are large areas for which we cannot determine the sign of the change with high confidence, as the precision of our estimate is close to the magnitude of the cover values. Additional research is thus required to reliably map shrub cover in this environment at annual intervals.
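The "bootstrap forest" is, in essence, a bagged ensemble of decision trees whose out-of-bag observations serve as built-in validation. A minimal sketch under stated assumptions follows, using scikit-learn's random forest regressor as a stand-in for the algorithm named above; the hyperparameters are illustrative and the random arrays are placeholders for the 19-column predictor table and reference cover values.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: per-site predictors (kernel weights, their ratios/interactions, albedos,
# nadir BRFs -- 19 columns); y: reference fractional shrub cover at ~800 sites.
rng = np.random.default_rng(0)
X, y = rng.random((800, 19)), rng.random(800)  # placeholders for reference data

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print(rf.oob_score_)            # out-of-bag R^2, akin to held-out validation
cover_map = rf.predict(X[:5])   # apply to gridded MISR predictors for mapping
```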
A digital library for medical imaging activities
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Furuie, Sérgio S.
2007-03-01
This work presents the development of an electronic infrastructure to make available a free, online, multipurpose and multimodality medical image database. The proposed infrastructure implements a distributed architecture for the medical image database, authoring tools, and a repository for multimedia documents. It also includes a peer-review model that assures the quality of the dataset. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has also been used as an electronic teaching system in radiology.
Towards a Selenographic Information System: Apollo 15 Mission Digitization
NASA Astrophysics Data System (ADS)
Votava, J. E.; Petro, N. E.
2012-12-01
The Apollo missions represent some of the most technically complex and extensively documented explorations ever endeavored by mankind. The surface experiments performed and the lunar samples collected in situ have helped form our understanding of the Moon's geologic history and the history of our Solar System. Unfortunately, a complication exists in the analysis and accessibility of these large volumes of lunar data and historical Apollo Era documents due to their multiple formats and disconnected web and print locations. Described here is a project to modernize, spatially reference, and link the lunar data into a comprehensive Selenographic Information System, starting with the Apollo 15 mission. Like its terrestrial counterparts, Geographic Information System (GIS) programs, such as ArcGIS, allow for easy integration, access, analysis, and display of large amounts of spatially related data. Documentation in this new database includes surface photographs, panoramas, samples and their laboratory studies (major element and rare earth element weight percents), planned and actual vehicle traverses, and field notes. Using high-resolution (<0.25 m/pixel) images from the Lunar Reconnaissance Orbiter Camera (LROC), the rover (LRV) tracks and astronaut surface activities, along with field sketches from the Apollo 15 Preliminary Science Report (Swann, 1972), were digitized and mapped in ArcMap. Point features were created for each documented sample within the Lunar Sample Compendium (Meyer, 2010) and hyperlinked to the appropriate Compendium file (.PDF) at the stable archive site: http://curator.jsc.nasa.gov/lunar/compendium.cfm. Historical Apollo Era photographs and assembled panoramas were included as point features at each station and hyperlinked to the Apollo Lunar Surface Journal (ALSJ) online image library. The database has been set up to allow for the easy display of spatial variation of selected attributes between samples. Attributes of interest with data from the Compendium added directly into the database include age (Ga), mass, texture, major oxide elements (weight %), and Th and U (ppm). This project will produce an easily accessible and linked database that can offer technical and scientific information in its spatial context. While it is not possible, given the enormous amount of data and the small allotment of time, to enter and/or link every detail to its map layer, the links made here direct the user to rich, stable archive websites and web-based databases that are easy to navigate. While this project only created a product for the Apollo 15 mission, it is the model for spatially referencing the other Apollo missions. Such a comprehensive lunar surface-activities database, a Selenographic Information System, will likely prove invaluable for future lunar studies. References: Meyer, C. (2010), The lunar sample compendium, June 2012 to August 2012, http://curator.jsc.nasa.gov/lunar/compendium.cfm, Astromaterials Res. & Exploration Sci., NASA L. B. Johnson Space Cent., Houston, TX. Swann, G. A. (1972), Preliminary geologic investigation of the Apollo 15 landing site, in Apollo 15 Preliminary Science Report, [NASA SP-289], pp. 5-1 - 5-112, NASA Manned Spacecraft Cent., Washington, D.C.
NASA Astrophysics Data System's New Data
NASA Astrophysics Data System (ADS)
Eichhorn, G.; Accomazzi, A.; Demleitner, M.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.
2000-05-01
The NASA Astrophysics Data System has greatly increased its data holdings. The Physics database now contains almost 900,000 references and the Astronomy database almost 550,000 references. The Instrumentation database has almost 600,000 references. The scanned articles in the ADS Article Service are increasing in number continuously. Almost 1 million pages have been scanned so far. Recently the abstracts books from the Lunar and Planetary Science Conference have been scanned and put on-line. The Monthly Notices of the Royal Astronomical Society are currently being scanned back to Volume 1. This is the last major journal to be completely scanned and on-line. In cooperation with a conservation project of the Harvard libraries, microfilms of historical observatory literature are currently being scanned. This will provide access to an important part of the historical literature. The ADS can be accessed at: http://adswww.harvard.edu This project is funded by NASA under grant NCC5-189.
Plant Reactome: a resource for plant pathways and comparative analysis.
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D; Wu, Guanming; Fabregat, Antonio; Elser, Justin L; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D; Ware, Doreen; Jaiswal, Pankaj
2017-01-04
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.