Sample records for include text images

  1. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Krauthammer, Prof. Michael

    2010-01-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use.
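
    The iterative projection-histogram idea described above can be illustrated with a short sketch. This is not the authors' C++ implementation: it is a minimal Python sketch, assuming a binarized page image (text pixels = 1), and the gap and depth parameters are illustrative values only.

      import numpy as np

      def split_by_projection(region, axis, min_gap=3):
          """Split a binary region along `axis` wherever its projection
          histogram stays empty for at least `min_gap` rows/columns."""
          hist = region.sum(axis=1 - axis)          # pixel count per row or column
          occupied = hist > 0
          segments, start, gap = [], None, 0
          for i, filled in enumerate(occupied):
              if filled:
                  if start is None:
                      start = i
                  gap = 0
              elif start is not None:
                  gap += 1
                  if gap >= min_gap:                # gap wide enough: close segment
                      segments.append((start, i - gap + 1))
                      start, gap = None, 0
          if start is not None:
              segments.append((start, len(occupied)))
          return segments

      def detect_text_boxes(binary, depth=3):
          """Iteratively alternate row-wise and column-wise projection splits."""
          boxes = [(0, binary.shape[0], 0, binary.shape[1])]
          for level in range(depth):
              axis, new_boxes = level % 2, []
              for (r0, r1, c0, c1) in boxes:
                  for (a, b) in split_by_projection(binary[r0:r1, c0:c1], axis):
                      if axis == 0:                 # split along rows
                          new_boxes.append((r0 + a, r0 + b, c0, c1))
                      else:                         # split along columns
                          new_boxes.append((r0, r1, c0 + a, c0 + b))
              boxes = new_boxes
          return boxes                              # candidate (r0, r1, c0, c1) boxes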

  2. A new pivoting and iterative text detection algorithm for biomedical images.

    PubMed

    Xu, Songhua; Krauthammer, Michael

    2010-12-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use. Copyright © 2010 Elsevier Inc. All rights reserved.

  3. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    PubMed Central

    Xu, Songhua; Krauthammer, Michael

    2010-01-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper’s key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. In this paper, we demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, with an F score of .60. The approach performs better than comparable approaches for text detection. Further, we show that the iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803

  4. Semi-automatic object geometry estimation for image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-01-01

    Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques in image processing and computer vison such as the Hough transform, the bilateral filter, and connected component analysis are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper.1 Experimental results including personalized images for both scenarios are shown, which demonstrate the effectiveness of our algorithms.

  5. Use for Teachers and Students | Galaxy of Images

    Science.gov Websites

    Frequently Asked Questions by students and teachers about the Galaxy of Images website, including whether unaltered images, text, or content from the website may be reused; any such use should include a link back to Smithsonian Libraries (http://www.sil.si.edu).

  6. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    PubMed

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A [Formula: see text]-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's [Formula: see text]), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
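
    As an illustration of the supervised pixel-classification step described above, the following is a minimal sketch, not the authors' pipeline: it uses raw intensity plus a small Gaussian filter bank as per-pixel features and scikit-learn's k-nearest-neighbor classifier; the feature set, the value of k, and the 0/1 mask encoding are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.neighbors import KNeighborsClassifier

      def pixel_features(image, sigmas=(1, 2, 4, 8)):
          """Per-pixel features: raw intensity plus Gaussian-smoothed intensities."""
          feats = [image] + [gaussian_filter(image, s) for s in sigmas]
          return np.stack(feats, axis=-1).reshape(-1, len(feats))

      def ga_probability_map(train_img, train_mask, test_img, k=15):
          """Train a k-NN pixel classifier on one labeled FAF image (mask of 0/1
          labels) and return a GA probability map for another image."""
          clf = KNeighborsClassifier(n_neighbors=k)
          clf.fit(pixel_features(train_img.astype(float)), train_mask.reshape(-1))
          prob = clf.predict_proba(pixel_features(test_img.astype(float)))[:, 1]
          return prob.reshape(test_img.shape)       # likelihood each pixel is GA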

  7. Integrated clinical workstations for image and text data capture, display, and teleconsultation.

    PubMed

    Dayhoff, R; Kuzmak, P M; Kirin, G

    1994-01-01

    The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway.

  8. A content analysis of thinspiration images and text posts on Tumblr.

    PubMed

    Wick, Madeline R; Harriger, Jennifer A

    2018-03-01

    Thinspiration is content advocating extreme weight loss by means of images and/or text posts. While past content analyses have examined thinspiration content on social media and other websites, no research to date has examined thinspiration content on Tumblr. Over the course of a week, 222 images and text posts were collected after entering the keyword 'thinspiration' into the Tumblr search bar. These images were then rated on a variety of characteristics. The majority of thinspiration images included a thin woman adhering to culturally based beauty ideals, often posing in a manner that accentuated her thinness or sexuality. The most common themes for thinspiration text posts included dieting/restraint, weight loss, food guilt, and body guilt. The thinspiration content on Tumblr appears to be consistent with that on other mediums. Future research should utilize experimental methods to examine the potential effects of consuming thinspiration content on Tumblr. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Integrated clinical workstations for image and text data capture, display, and teleconsultation.

    PubMed Central

    Dayhoff, R.; Kuzmak, P. M.; Kirin, G.

    1994-01-01

    The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway. PMID:7949899

  10. An Adaptive Inpainting Algorithm Based on DCT Induced Wavelet Regularization

    DTIC Science & Technology

    2013-01-01

    Applications of image inpainting include old film restoration, video inpainting [4], and de-interlacing of video sequences. [The indexed snippet also contains the caption of Fig. 1: performance of various inpainting algorithms for a cartoon image with text, showing (a) the original test image; (b) the test image with text; and images inpainted by (c) SF (PSNR=37.38 dB); (d) SF-LDCT (PSNR=37.37 dB); (e) MCA (PSNR=37.04 dB); and (f) the proposed method.]

  11. Moving Multimedia: The Information Value in Images.

    ERIC Educational Resources Information Center

    Berinstein, Paula

    1997-01-01

    Discusses the value and use of images as information. Topics include the information in images versus text; a taxonomy of image types; resources related to images; and the use of images in architecture, engineering, advertising, and competitive intelligence. (LRW)

  12. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there is an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written about algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end-users, it should be effective, accurate and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, because the text has a strong contrast with the background, a region variance based algorithm is used to detect the text regions. In post processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system is implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.
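
    The region-variance detection step lends itself to a short sketch. The following is a hedged illustration, assuming burned-in text is high-contrast against the background; the window size, variance threshold, and geometric constraints are made-up values, not those of the authors' system, and the level-set extraction step is omitted.

      import numpy as np
      from scipy.ndimage import uniform_filter, label, find_objects

      def local_variance(image, size=15):
          mean = uniform_filter(image, size)
          mean_sq = uniform_filter(image * image, size)
          return mean_sq - mean * mean

      def candidate_text_regions(image, var_thresh=500.0, min_area=50, max_height=40):
          var = local_variance(image.astype(float))
          regions, _ = label(var > var_thresh)      # connected high-variance blobs
          boxes = []
          for sl in find_objects(regions):
              h = sl[0].stop - sl[0].start
              w = sl[1].stop - sl[1].start
              # Geometric constraints: drop thin lines and anatomy-sized blobs.
              if h * w >= min_area and h <= max_height and w > h:
                  boxes.append(sl)
          return boxes                              # slices of likely burned-in text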

  13. Multispectral processing of combined visible and x-ray fluorescence imagery in the Archimedes palimpsest

    NASA Astrophysics Data System (ADS)

    Walvoord, Derek; Bright, Allison; Easton, Roger L., Jr.

    2008-02-01

    The Archimedes palimpsest is one of the most significant early texts in the history of science that has survived to the present day. It includes the oldest known copies of text from seven treatises by Archimedes, along with pages from other important historical writings. In the 13th century, the original texts were erased and overwritten by a Christian prayer book, which was used in religious services probably into the 19th century. Since 2001, much of the text from treatises of Archimedes has been transcribed from images taken in reflected visible light and visible fluorescence generated by exposure of the parchment to ultraviolet light. However, these techniques do not work well on all pages of the manuscript, including the badly stained colophon, four pages of the manuscript obscured by icons painted during the first half of the 20th century, and some pages of non-Archimedes texts. Much of the text on the colophon and overpainted pages has been recovered from X-ray fluorescence (XRF) imagery. In this work, the XRF images of one of the other pages were combined with the bands of optical images to create hyperspectral image cubes and processed using standard statistical classification techniques developed for environmental remote sensing to test if this improved the recovery of the original text.
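
    The band-stacking and statistical classification idea can be sketched briefly. This is a hedged illustration, not the processing chain used on the palimpsest: it assumes the visible and XRF bands are already co-registered and that training pixels for classes such as undertext, overtext, and parchment have been hand-labeled; linear discriminant analysis stands in for whichever classifier was actually used.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def classify_cube(bands, training_pixels, training_labels):
          """bands: list of co-registered 2-D arrays (visible + XRF channels).
          training_pixels: (n_samples, n_bands) spectra with hand-assigned labels."""
          cube = np.stack(bands, axis=-1)                    # H x W x B image cube
          clf = LinearDiscriminantAnalysis().fit(training_pixels, training_labels)
          flat = cube.reshape(-1, cube.shape[-1])
          return clf.predict(flat).reshape(cube.shape[:2])   # per-pixel class map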

  14. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g., PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medicinal imaging like electroencephalography (EEG), magnetoencephalography (MEG), echocardiography (ECG), positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in scientific and medical communities, as they play a vital role in providing major original data, experimental and computational results in concise form. One major challenge for implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product line architecture-based bioinformatics tool 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats including text, PDF, XML and image files. Hence, MSL is an easy to install and use analysis tool to interpret published scientific literature in PDF format.
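
    A minimal sketch of the same kind of pipeline (page text extraction, figure extraction, OCR of figure text) follows. This is not MSL itself: it assumes the PyMuPDF and pytesseract packages and a locally installed Tesseract engine.

      import io
      import fitz                      # PyMuPDF
      import pytesseract
      from PIL import Image

      def extract_pdf_content(path):
          """Return page text and OCR'd figure text for each page of a PDF."""
          doc = fitz.open(path)
          records = []
          for page in doc:
              figure_texts = []
              for img in page.get_images(full=True):
                  raw = doc.extract_image(img[0])            # img[0] is the xref
                  figure = Image.open(io.BytesIO(raw["image"]))
                  figure_texts.append(pytesseract.image_to_string(figure))
              records.append({"text": page.get_text(), "figure_text": figure_texts})
          return records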

  15. A mobile phone food record app to digitally capture dietary intake for adolescents in a free-living environment: usability study.

    PubMed

    Casperson, Shanon L; Sieling, Jared; Moon, Jon; Johnson, LuAnn; Roemmich, James N; Whigham, Leah

    2015-03-13

    Mobile technologies are emerging as valuable tools to collect and assess dietary intake. Adolescents readily accept and adopt new technologies; thus, a food record app (FRapp) may be a useful tool to better understand adolescents' dietary intake and eating patterns. We sought to determine the amenability of adolescents, in a free-living environment with minimal parental input, to use the FRapp to record their dietary intake. Eighteen community-dwelling adolescents (11-14 years) received detailed instructions to record their dietary intake for 3-7 days using the FRapp. Participants were instructed to capture before and after images of all foods and beverages consumed and to include a fiducial marker in the image. Participants were also asked to provide text descriptors including amount and type of all foods and beverages consumed. Eight of 18 participants were able to follow all instructions: included pre- and post-meal images, a fiducial marker, and a text descriptor and collected diet records on 2 weekdays and 1 weekend day. Dietary intake was recorded on average for 3.2 (SD 1.3 days; 68% weekdays and 32% weekend days) with an average of 2.2 (SD 1.1) eating events per day per participant. A total of 143 eating events were recorded, of which 109 had at least one associated image and 34 were recorded with text only. Of the 109 eating events with images, 66 included all foods, beverages and a fiducial marker and 44 included both a pre- and post-meal image. Text was included with 78 of the captured images. Of the meals recorded, 36, 33, 35, and 39 were breakfasts, lunches, dinners, and snacks, respectively. These data suggest that mobile devices equipped with an app to record dietary intake will be used by adolescents in a free-living environment; however, a minority of participants followed all directions. User-friendly mobile food record apps may increase participant amenability, increasing our understanding of adolescent dietary intake and eating patterns. To improve data collection, the FRapp should deliver prompts for tasks, such as capturing images before and after each eating event, including the fiducial marker in the image, providing complete and accurate text information, and ensuring all eating events are recorded and should be customizable to individuals and to different situations. Clinicaltrials.gov NCT01803997. http://clinicaltrials.gov/ct2/show/NCT01803997 (Archived at: http://www.webcitation.org/6WiV1vxoR).

  16. Recruiting Young Gay and Bisexual Men for a Human Papillomavirus Vaccination Intervention Through Social Media: The Effects of Advertisement Content

    PubMed Central

    Reiter, Paul L; Katz, Mira L; Bauermeister, Jose A; Shoben, Abigail B; Paskett, Electra D; McRee, Annie-Laurie

    2017-01-01

    Background: Web-based approaches, specifically social media sites, represent a promising approach for recruiting young gay and bisexual men for research studies. Little is known, however, about how the performance of social media advertisements (ads) used to recruit this population is affected by ad content (ie, image and text). Objective: The aim of this study was to evaluate the effects of different images and text included in social media ads used to recruit young gay and bisexual men for the pilot test of a Web-based human papillomavirus (HPV) vaccination intervention. Methods: In July and September 2016, we used paid Facebook advertisements to recruit men who were aged 18-25 years, self-identified as gay or bisexual, were US residents, and had not received the HPV vaccine. A 4x2x2 factorial experiment varied ad image (a single young adult male, a young adult male couple, a group of young adult men, or a young adult male talking to a doctor), content focus (text mentioning HPV or HPV vaccine), and disease framing (text mentioning cancer or a sexually transmitted disease [STD]). Poisson regression determined whether these experimental factors affected ad performance. Results: The recruitment campaign reached a total of 35,646 users who viewed ads a total of 36,395 times. This resulted in an overall unique click-through rate of 2.01% (717/35,646) and an overall conversion rate of 0.66% (241/36,395). Reach was higher for ads that included an image of a couple (incidence rate ratio, IRR=4.91, 95% CI 2.68-8.97, P<.001) or a group (IRR=2.65, 95% CI 1.08-6.50, P=.03) compared with those that included an image of a single person. Ads that included an image of a couple also had a higher conversion rate (IRR=2.56, 95% CI 1.13-5.77, P=.02) than ads that included an image of a single person. Ads with text mentioning an STD had a higher unique click-through rate compared with ads with text mentioning cancer (IRR=1.34, 95% CI 1.06-1.69, P=.01). The campaign cost a total of US $413.72 and resulted in 150 eligible and enrolled individuals (US $2.76 per enrolled participant). Conclusions: Facebook ads are a convenient and cost-efficient strategy for reaching and recruiting young gay and bisexual men for a Web-based HPV vaccination intervention. To help optimize ad performance among this population, researchers should consider the importance of the text and image included in the social media recruitment ads. PMID:28576758
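
    The Poisson-regression analysis described above can be sketched as follows. The data frame, factor coding, and counts below are hypothetical; only the modeling pattern (a Poisson GLM with a log-exposure offset whose exponentiated coefficients are incidence rate ratios) reflects the abstract.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical per-ad data: clicks, impressions, and two experimental factors.
      ads = pd.DataFrame({
          "clicks":  [12, 30, 25, 9, 18, 40, 22, 11],
          "views":   [900, 1100, 1000, 800, 950, 1200, 1050, 700],
          "image":   ["single", "couple", "group", "doctor"] * 2,
          "framing": ["cancer"] * 4 + ["STD"] * 4,
      })

      model = smf.glm(
          "clicks ~ C(image, Treatment('single')) + framing",
          data=ads,
          family=sm.families.Poisson(),
          offset=np.log(ads["views"]),   # exposure: how many times the ad was seen
      ).fit()

      print(np.exp(model.params))        # exponentiated coefficients = IRRs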

  17. Providers' Access of Imaging Versus Only Reports: A System Log File Analysis.

    PubMed

    Jung, Hye-Young; Gichoya, Judy Wawira; Vest, Joshua R

    2017-02-01

    An increasing number of technologies allow providers to access the results of imaging studies. This study examined differences in health care professionals' access of radiology images compared with text-only reports through a health information exchange system. The study sample included 157,256 historical sessions from a health information exchange system that enabled 1,670 physicians and non-physicians to access text-based reports and imaging over the period 2013 to 2014. The primary outcome was an indicator of access of an imaging study instead of access of a text-only report. Multilevel mixed-effects regression models were used to estimate the association between provider and session characteristics and access of images compared with text-only reports. Compared with primary care physicians, specialists had an 18% higher probability of accessing actual images instead of text-only reports (β = 0.18; P < .001). Compared with primary care practice settings, the probability of accessing images was 4% higher for specialty care practices (P < .05) and 8% lower for emergency departments (P < .05). Radiologists, orthopedists, and neurologists accounted for 79% of all the sessions with actual images accessed. Orthopedists, radiologists, surgeons, and pulmonary disease specialists accessed imaging more often than text-based reports only. Consideration of differences in the need to access images rather than text-only reports, based on provider type and care setting, is needed to maximize the benefits of image sharing for patient care. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  18. Vaping on Instagram: cloud chasing, hand checks and product placement

    PubMed Central

    Chu, Kar-Hai; Allem, Jon-Patrick; Cruz, Tess Boley; Unger, Jennifer B

    2016-01-01

    Introduction This study documented images posted on Instagram of electronic cigarettes (e-cigarette) and vaping (activity associated with e-cigarette use). Although e-cigarettes have been studied on Twitter, few studies have focused on Instagram, despite having 500 million users. Instagram’s emphasis on images warranted investigation of e-cigarettes, as past tobacco industry strategies demonstrated that images could be used to mislead in advertisements, or normalise tobacco-related behaviours. Findings should prove informative to tobacco control policies in the future. Methods 3 months of publicly available data were collected from Instagram, including images and associated metadata (n=2208). Themes of images were classified as (1) activity, for example, a person blowing vapour; (2) product, for example, a personal photo of an e-cigarette device; (3) advertisement; (4) text, for example, ‘meme’ or image containing mostly text and (5) other. User endorsement (likes) of each type of image was recorded. Caption text was analysed to explore different trends in vaping and e-cigarette-related text. Results Analyses found that advertisement-themed images were most common (29%), followed by product (28%), and activity (18%). Likes were more likely to accompany activity and product-themed images compared with advertisement or text-themed images (p<0.01). Vaping-related text greatly outnumbered e-cigarette-related text in the image captions. Conclusions Instagram affords its users the ability to post images of e-cigarette-related behaviours and gives advertisers the opportunity to display their product. Future research should incorporate novel data streams to improve public health surveillance, survey development and educational campaigns. PMID:27660111

  19. Twelve tips for creating trigger images for problem-based learning cases.

    PubMed

    Azer, Samy A

    2007-03-01

    A trigger is the starting point of problem-based learning (PBL) cases. It is usually in the form of 5-6 text lines that provide the key information about the main character (usually the patient), including 3-4 of the patient's presenting problems. In addition to the trigger text, most programs using PBL include a visual trigger. This might be in the form of a single image, a series of images, a video clip, a cartoon, or even one of the patient's investigation results (e.g. chest X-ray, pathology report, or urine sample analysis). The main educational objectives of the trigger image are as follows: (1) to introduce the patient to the students; (2) to enhance students' observation skills; (3) to provide them with new information to add to the cues obtained from the trigger text; and (4) to stimulate students to ask questions as they develop their enquiry plan. When planned and delivered effectively, trigger images should be engaging and stimulate group discussion. Understanding the educational objectives of using trigger images and choosing appropriate images are the keys for constructing successful PBL cases. These twelve tips highlight the key steps in the successful creation of trigger images.

  20. Automated extraction of radiation dose information from CT dose report images.

    PubMed

    Li, Xinhua; Zhang, Da; Liu, Bob

    2011-06-01

    The purpose of this article is to describe the development of an automated tool for retrieving texts from CT dose report images. Optical character recognition was adopted to perform text recognitions of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and DLP of scanned series.
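
    A minimal sketch of this workflow follows, assuming the report image contains a line such as "Total DLP (mGy-cm): 512.3"; the regular expression and report wording are assumptions, not the exact formats handled by the authors' tool.

      import re
      import pytesseract
      from PIL import Image

      def total_dlp_from_report(image_path):
          """OCR a dose report image and parse the total DLP value, if present."""
          text = pytesseract.image_to_string(Image.open(image_path))
          match = re.search(r"Total\s+DLP[^0-9]*([\d.]+)", text, re.IGNORECASE)
          return float(match.group(1)) if match else None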

  1. Image display device in digital TV

    DOEpatents

    Choi, Seung Jong [Seoul, KR]

    2006-07-18

    Disclosed is an image display device in a digital TV that is capable of carrying out the conversion into various resolutions by using single bit map data in the digital TV. The image display device includes: a data processing part for executing bit map conversion, compression, restoration and format-conversion for text data; a memory for storing the bit map data obtained according to the bit map conversion and compression in the data processing part and image data inputted from an arbitrary receiving part, the receiving part receiving one of digital image data and analog image data; an image outputting part for reading the image data from the memory; and a display processing part for mixing the image data read from the image outputting part and the bit map data converted in format from the data processing part. Therefore, the image display device according to the present invention can convert text data in such a manner as to correspond with various resolutions, carry out the compression for bit map data, thereby reducing the memory space, and support text data of an HTML format, thereby providing the image with the text data of various shapes.

  2. Image-based mobile service: automatic text extraction and translation

    NASA Astrophysics Data System (ADS)

    Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.

    2010-01-01

    We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.

  3. Vaping on Instagram: cloud chasing, hand checks and product placement.

    PubMed

    Chu, Kar-Hai; Allem, Jon-Patrick; Cruz, Tess Boley; Unger, Jennifer B

    2016-09-01

    This study documented images posted on Instagram of electronic cigarettes (e-cigarette) and vaping (activity associated with e-cigarette use). Although e-cigarettes have been studied on Twitter, few studies have focused on Instagram, despite having 500 million users. Instagram's emphasis on images warranted investigation of e-cigarettes, as past tobacco industry strategies demonstrated that images could be used to mislead in advertisements, or normalise tobacco-related behaviours. Findings should prove informative to tobacco control policies in the future. 3 months of publicly available data were collected from Instagram, including images and associated metadata (n=2208). Themes of images were classified as (1) activity, for example, a person blowing vapour; (2) product, for example, a personal photo of an e-cigarette device; (3) advertisement; (4) text, for example, 'meme' or image containing mostly text; and (5) other. User endorsement (likes) of each type of image was recorded. Caption text was analysed to explore different trends in vaping and e-cigarette-related text. Analyses found that advertisement-themed images were most common (29%), followed by product (28%), and activity (18%). Likes were more likely to accompany activity and product-themed images compared with advertisement or text-themed images (p<0.01). Vaping-related text greatly outnumbered e-cigarette-related text in the image captions. Instagram affords its users the ability to post images of e-cigarette-related behaviours and gives advertisers the opportunity to display their product. Future research should incorporate novel data streams to improve public health surveillance, survey development and educational campaigns. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  4. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format

    PubMed Central

    Ahmed, Zeeshan; Dandekar, Thomas

    2018-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g., PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medicinal imaging like electroencephalography (EEG), magnetoencephalography (MEG), echocardiography (ECG), positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in scientific and medical communities, as they play a vital role in providing major original data, experimental and computational results in concise form. One major challenge for implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product line architecture-based bioinformatics tool ‘Mining Scientific Literature (MSL)’, which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system’s output in different formats including text, PDF, XML and image files. Hence, MSL is an easy to install and use analysis tool to interpret published scientific literature in PDF format. PMID:29721305

  5. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  6. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  7. Recruiting Young Gay and Bisexual Men for a Human Papillomavirus Vaccination Intervention Through Social Media: The Effects of Advertisement Content.

    PubMed

    Reiter, Paul L; Katz, Mira L; Bauermeister, Jose A; Shoben, Abigail B; Paskett, Electra D; McRee, Annie-Laurie

    2017-06-02

    Web-based approaches, specifically social media sites, represent a promising approach for recruiting young gay and bisexual men for research studies. Little is known, however, about how the performance of social media advertisements (ads) used to recruit this population is affected by ad content (ie, image and text). The aim of this study was to evaluate the effects of different images and text included in social media ads used to recruit young gay and bisexual men for the pilot test of a Web-based human papillomavirus (HPV) vaccination intervention. In July and September 2016, we used paid Facebook advertisements to recruit men who were aged 18-25 years, self-identified as gay or bisexual, were US residents, and had not received the HPV vaccine. A 4x2x2 factorial experiment varied ad image (a single young adult male, a young adult male couple, a group of young adult men, or a young adult male talking to a doctor), content focus (text mentioning HPV or HPV vaccine), and disease framing (text mentioning cancer or a sexually transmitted disease [STD]). Poisson regression determined whether these experimental factors affected ad performance. The recruitment campaign reached a total of 35,646 users who viewed ads a total of 36,395 times. This resulted in an overall unique click-through rate of 2.01% (717/35,646) and an overall conversion rate of 0.66% (241/36,395). Reach was higher for ads that included an image of a couple (incidence rate ratio, IRR=4.91, 95% CI 2.68-8.97, P<.001) or a group (IRR=2.65, 95% CI 1.08-6.50, P=.03) compared with those that included an image of a single person. Ads that included an image of a couple also had a higher conversion rate (IRR=2.56, 95% CI 1.13-5.77, P=.02) than ads that included an image of a single person. Ads with text mentioning an STD had a higher unique click-through rate compared with ads with text mentioning cancer (IRR=1.34, 95% CI 1.06-1.69, P=.01). The campaign cost a total of US $413.72 and resulted in 150 eligible and enrolled individuals (US $2.76 per enrolled participant). Facebook ads are a convenient and cost-efficient strategy for reaching and recruiting young gay and bisexual men for a Web-based HPV vaccination intervention. To help optimize ad performance among this population, researchers should consider the importance of the text and image included in the social media recruitment ads. ©Paul L Reiter, Mira L Katz, Jose A Bauermeister, Abigail B Shoben, Electra D Paskett, Annie-Laurie McRee. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 02.06.2017.

  8. Integration of medical imaging into a multi-institutional hospital information system structure.

    PubMed

    Dayhoff, R E

    1995-01-01

    The Department of Veterans Affairs (VA) is providing integrated text and image data to its clinical users at its Washington and Baltimore medical centers and, soon, at nine other medical centers. The DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including cardiology, gastroenterology, pathology, dermatology, surgery, radiology, podiatry, dentistry, and emergency medicine. These images, which include color and gray scale images, and electrocardiogram waveforms, are displayed on workstations located throughout the medical centers. Integration of clinical images with the VA's electronic mail system allows transfer of data from one medical center to another. The ability to incorporate transmitted text and image data into on-line patient records at the collaborating sites is an important aspect of professional consultation. In order to achieve the maximum benefits from an integrated patient record system, a critical mass of information must be available for clinicians. When there is also seamless support for administration, it becomes possible to re-engineer the processes involved in providing medical care.

  9. Developing a comprehensive system for content-based retrieval of image and text data from a national survey

    NASA Astrophysics Data System (ADS)

    Antani, Sameer K.; Natarajan, Mukil; Long, Jonathan L.; Long, L. Rodney; Thoma, George R.

    2005-04-01

    The article describes the status of our ongoing R&D at the U.S. National Library of Medicine (NLM) towards the development of an advanced multimedia database biomedical information system that supports content-based image retrieval (CBIR). NLM maintains a collection of 17,000 digitized spinal X-rays along with text survey data from the Second National Health and Nutritional Examination Survey (NHANES II). These data serve as a rich data source for epidemiologists and researchers of osteoarthritis and musculoskeletal diseases. It is currently possible to access these through text keyword queries using our Web-based Medical Information Retrieval System (WebMIRS). CBIR methods developed specifically for biomedical images could offer direct visual searching of these images by means of an example image or user sketch. We are building a system which supports hybrid queries that have text and image-content components. R&D goals include developing algorithms for robust image segmentation for localizing and identifying relevant anatomy, labeling the segmented anatomy based on its pathology, developing suitable indexing and similarity matching methods for images and image features, and associating the survey text information for query and retrieval along with the image data. Some highlights of the system developed in MATLAB and Java are: use of a networked or local centralized database for text and image data; flexibility to incorporate new research work; a means to control access to system components under development; and use of XML for structured reporting. The article details the design, features, and algorithms in this third revision of this prototype system, CBIR3.
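
    A hybrid text-plus-image-content query of the kind described can be sketched as follows. The record layout, field names, and feature vectors are hypothetical; cosine similarity stands in for whichever image-feature matching the system actually uses.

      import numpy as np

      def hybrid_query(records, text_filter, query_vec, top_k=5):
          """records: dicts with 'survey' (text fields) and 'features' (image
          feature vector). Filter on text, then rank by cosine similarity."""
          q = query_vec / np.linalg.norm(query_vec)
          scored = []
          for rec in records:
              if not text_filter(rec["survey"]):
                  continue
              f = rec["features"] / np.linalg.norm(rec["features"])
              scored.append((float(f @ q), rec))
          scored.sort(key=lambda t: t[0], reverse=True)
          return scored[:top_k]

      # Example: records whose survey text reports back pain, ranked by similarity
      # of a (hypothetical) vertebral shape-feature vector to the query image's.
      # results = hybrid_query(db, lambda s: s.get("back_pain") == "yes", query_vec)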

  10. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security and technology fields mean more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions, surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and segmentation, enabling a custom system to be built that is capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing times, and lower energy consumption compared with the best state-of-the-art published techniques. The system successfully produced impressive OCR accuracies (90% to 93%) using customized systems generated by our development framework in two industrial OCR applications: water bottle label text recognition and concrete slab plate text recognition. The system was also trained for the Arabic language alphabet, and demonstrated extremely high recognition accuracy (99%) for Arabic license name plate text recognition with processing times of 10 seconds. The accuracy and run times of the system were compared with conventional and state-of-the-art methods, and the proposed system shows excellent results.

  11. An Open Source Agenda for Research Linking Text and Image Content Features.

    ERIC Educational Resources Information Center

    Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi

    2001-01-01

    Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…

  12. Recovering Ancient Inscriptions by X-ray Fluorescence Imaging

    NASA Astrophysics Data System (ADS)

    Powers, Judson; Dimitrova, Nora; Huang, Rong; Smilgies, Detlef-M.; Bilderback, Don; Clinton, Kevin; Thorne, Robert

    2006-03-01

    For many ancient cultures including those of the Mediterranean, carved stone inscriptions provide our most detailed historical record. Over the ages the surfaces of many of these inscriptions have been eroded so that the original text can no longer be distinguished. A method that allowed at least partial recovery of this lost text would provide a major breakthrough for the study of these cultures. The scope of analytical techniques that can be applied to stone tablets is limited by their large size and weight. We have applied X-ray fluorescence imaging to study the text of ancient stone inscriptions [1]. This method allows the concentrations of trace elements, including those introduced during inscription and painting, to be measured and mapped. The images created in this way correspond exactly to the published text of the inscription, both when traces of letters are visible with the naked eye and when they are barely detectable. [1] J. Powers et al., Zeitschrift für Papyrologie und Epigraphik 152: 221-227 (2005).

  13. E-Roadway Animation (Text Version) | Transportation Research | NREL

    Science.gov Websites

    Text version of the E-Roadway animation. Background images include 1) a U.S. map with text (80% overall emissions reduction by …), 3) a California map with text (80% transportation emissions reduction by 2050), and 4) a European …

  14. Text Extraction from Scene Images by Character Appearance and Structure Modeling

    PubMed Central

    Yi, Chucai; Tian, Yingli

    2012-01-01

    In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification. PMID:23316111

  15. Added Value of Selected Images Embedded Into Radiology Reports to Referring Clinicians

    PubMed Central

    Iyer, Veena R.; Hahn, Peter F.; Blaszkowsky, Lawrence S.; Thayer, Sarah P.; Halpern, Elkan F.; Harisinghani, Mukesh G.

    2011-01-01

    Purpose The aim of this study was to evaluate the added utility of embedding images for findings described in radiology text reports to referring clinicians. Methods Thirty-five cases referred for abdominal CT scans in 2007 and 2008 were included. Referring physicians were asked to view text-only reports, followed by the same reports with pertinent images embedded. For each pair of reports, a questionnaire was administered. A 5-point, Likert-type scale was used to assess if the clinical query was satisfactorily answered by the text-only report. A “yes-or-no” question was used to assess whether the report with images answered the clinical query better; a positive answer to this question generated “yes-or-no” queries to examine whether the report with images helped in making a more confident decision on management, whether it reduced time spent in forming the plan, and whether it altered management. The questionnaire asked whether a radiologist would be contacted with queries on reading the text-only report and the report with images. Results In 32 of 35 cases, the text-only reports satisfactorily answered the clinical queries. In these 32 cases, the reports with attached images helped in making more confident management decisions and reduced time in planning management. Attached images altered management in 2 cases. Radiologists would have been consulted for clarifications in 21 and 10 cases on reading the text-only reports and the reports with embedded images, respectively. Conclusions Providing relevant images with reports saves time, increases physicians' confidence in deciding treatment plans, and can alter management. PMID:20193926

  16. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    PubMed Central

    Sang, Jun; Ling, Shenggui; Alam, Mohammad S.

    2012-01-01

    In this paper, a double-random phase-encoding technique-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array and the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003
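
    The double-random phase-encoding step itself is compact enough to sketch. The following minimal example encodes and recovers a 2-D array using two random phase masks as keys; the text-to-array packing, bit manipulation, and superimposition on a host image described above are omitted, and the mask generation is an illustrative choice.

      import numpy as np

      def drpe_encrypt(arr, seed=0):
          rng = np.random.default_rng(seed)
          phase1 = np.exp(2j * np.pi * rng.random(arr.shape))   # input-plane key
          phase2 = np.exp(2j * np.pi * rng.random(arr.shape))   # Fourier-plane key
          return np.fft.ifft2(np.fft.fft2(arr * phase1) * phase2), (phase1, phase2)

      def drpe_decrypt(encoded, keys):
          phase1, phase2 = keys
          return np.fft.ifft2(np.fft.fft2(encoded) * np.conj(phase2)) * np.conj(phase1)

      secret = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
      cipher, keys = drpe_encrypt(secret)
      recovered = np.real(drpe_decrypt(cipher, keys))
      assert np.allclose(recovered, secret, atol=1e-6)          # lossless round trip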

  17. Storing and Viewing Electronic Documents.

    ERIC Educational Resources Information Center

    Falk, Howard

    1999-01-01

    Discusses the conversion of fragile library materials to computer storage and retrieval to extend the life of the items and to improve accessibility through the World Wide Web. Highlights include entering the images, including scanning; optical character recognition; full text and manual indexing; and available document- and image-management…

  18. A client/server system for Internet access to biomedical text/image databanks.

    PubMed

    Thoma, G R; Long, L R; Berman, L E

    1996-01-01

    Internet access to mixed text/image databanks is finding application in the medical world. An example is a database of medical X-rays and associated data consisting of demographic, socioeconomic, physician's exam, medical laboratory and other information collected as part of a nationwide health survey conducted by the government. Another example is a collection of digitized cryosection images, CT and MR taken of cadavers as part of the National Library of Medicine's Visible Human Project. In both cases, the challenge is to provide access to both the image and the associated text for a wide end user community to create atlases, conduct epidemiological studies, and develop image-specific algorithms for compression, enhancement and other types of image processing, among many other applications. The databanks mentioned above are being created in prototype form. This paper describes the prototype system developed for the archiving of the data and the client software to enable a broad range of end users to access the archive, retrieve text and image data, display the data and manipulate the images. System design considerations include: data organization in a relational database management system with object-oriented extensions; a hierarchical organization of the image data by different resolution levels for different user classes; client design based on common hardware and software platforms incorporating SQL search capability, X Window, Motif and TAE (a development environment supporting rapid prototyping and management of graphic-oriented user interfaces); potential to include ultra high resolution display monitors as a user option; intuitive user interface paradigm for building complex queries; and contrast enhancement, magnification and mensuration tools for better viewing by the user.

  19. Comparison of the application of B-mode and strain elastography ultrasound in the estimation of lymph node metastasis of papillary thyroid carcinoma based on a radiomics approach.

    PubMed

    Liu, Tongtong; Ge, Xifeng; Yu, Jinhua; Guo, Yi; Wang, Yuanyuan; Wang, Wenping; Cui, Ligang

    2018-06-21

    B-mode ultrasound (B-US) and strain elastography ultrasound (SE-US) images have the potential to distinguish thyroid tumors with different lymph node (LN) status. The purpose of our study is to investigate whether the application of multi-modality images including B-US and SE-US can improve the discriminability of thyroid tumors with LN metastasis based on a radiomics approach. Ultrasound (US) images including B-US and SE-US images of 75 papillary thyroid carcinoma (PTC) cases were retrospectively collected. A radiomics approach was developed in this study to estimate LN status of PTC patients. The approach included image segmentation, quantitative feature extraction, feature selection and classification. Three feature sets were extracted from B-US, SE-US, and multi-modality containing B-US and SE-US. They were used to evaluate the contribution of different modalities. A total of 684 radiomics features have been extracted in our study. We used a sparse representation coefficient-based feature selection method with 10-bootstrap to reduce the dimension of the feature sets. A support vector machine with leave-one-out cross-validation was used to build the model for estimating LN status. Using features extracted from both B-US and SE-US, the radiomics-based model produced an area under the receiver operating characteristic curve (AUC) [Formula: see text] 0.90, accuracy (ACC) [Formula: see text] 0.85, sensitivity (SENS) [Formula: see text] 0.77 and specificity (SPEC) [Formula: see text] 0.88, which was better than using features extracted from B-US or SE-US separately. Multi-modality images provided more information in the radiomics study. Combined use of B-US and SE-US could improve the LN metastasis estimation accuracy for PTC patients.
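
    The classification step (support vector machine with leave-one-out cross-validation) can be sketched as below, assuming a radiomics feature matrix X (cases by selected features) and binary LN labels y are already available; the kernel choice and thresholding are assumptions, and the sparse-representation feature selection is not reproduced.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.metrics import roc_auc_score, accuracy_score

      def evaluate_loo(X, y):
          """Leave-one-out evaluation of an SVM on a radiomics feature matrix."""
          clf = SVC(kernel="rbf", probability=True)
          scores = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                                     method="predict_proba")[:, 1]
          preds = (scores >= 0.5).astype(int)
          return {"AUC": roc_auc_score(y, scores), "ACC": accuracy_score(y, preds)}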

  20. Three-dimensional registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation.

    PubMed

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L

    2016-04-01

    Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate potential misinterpretations confronted by the typical histological approaches to validation, with estimated 1-mm errors. The method can be used to create annotated datasets and automated plaque classification methods and can be extended to other intravascular imaging modalities.
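    The registration idea, reduced to its core, is an optimisation of transform parameters that maximises lumen overlap. The sketch below is a toy stand-in: a 2-D translation replaces the paper's 11-parameter polynomial virtual catheter, synthetic disc masks replace real lumen segmentations, and the Dice coefficient is assumed as the overlap measure.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def dice(a, b):
    # Overlap measure between two binary masks
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def make_disc(h, w, cy, cx, r):
    yy, xx = np.mgrid[:h, :w]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

cryo_lumen = make_disc(64, 64, 32, 32, 10)    # lumen mask in a cryo-image slice (synthetic)
ivoct_lumen = make_disc(64, 64, 28, 35, 10)   # same lumen, offset in the IVOCT frame (synthetic)

def cost(p):
    # Toy 2-D translation stands in for the 11-parameter virtual-catheter model
    moved = shift(ivoct_lumen.astype(float), p, order=1) > 0.5
    return -dice(moved, cryo_lumen)

res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
print("estimated offset:", res.x, "dice:", -res.fun)
```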

  1. Using Images of Women in American History

    ERIC Educational Resources Information Center

    Bennett, Linda B.; Williams, Frances Janeene

    2014-01-01

    Research on the inclusion of women in textbooks found severe inequalities in the way women were included in text and illustration. The use of carefully and purposefully selected images in the classroom can address both the lack of images of women in textbooks and the stereotypical portrayal of women in textbook images.

  2. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
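    A minimal illustration of the three-layer imaging model: a binary mask selects, per pixel, between a foreground (text colour) layer and a contone background layer, so that each layer can be routed to the compressor best suited to it. The layer contents and sizes below are invented for the example; no actual codecs are shown.

```python
import numpy as np

def mrc_compose(mask, foreground, background):
    """Reconstruct a compound page from the three MRC layers."""
    return np.where(mask[..., None], foreground, background)

h, w = 32, 32
background = np.full((h, w, 3), 220, dtype=np.uint8)   # smooth contone layer (e.g. JPEG-coded)
foreground = np.zeros((h, w, 3), dtype=np.uint8)       # text-colour layer
mask = np.zeros((h, w), dtype=bool)                    # binary selector layer (e.g. binary-coded)
mask[10:14, 4:28] = True                               # a synthetic "line of text"

page = mrc_compose(mask, foreground, background)
print(page.shape, page[12, 10], page[0, 0])            # text pixel vs. background pixel
```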

  3. A fast image encryption algorithm based on only blocks in cipher text

    NASA Astrophysics Data System (ADS)

    Wang, Xing-Yuan; Wang, Qian

    2014-03-01

    In this paper, a fast image encryption algorithm is proposed in which shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain-text image are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new position of each plain-image pixel in the cipher image, including the row and column of the block to which the pixel belongs and the place where the pixel is put within the block. After each pixel is encrypted, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after each row of the plain image is encrypted, the initial conditions are also changed by the skew tent map. Finally, it is illustrated that this algorithm has a fast speed, a large key space, and good properties in withstanding differential attacks, statistical analysis, known-plaintext and chosen-plaintext attacks.
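    The key-stream generation can be sketched as follows. This example is a simplification under stated assumptions: a single logistic map and a plain XOR substitution stand in for the four coupled maps, the block-placement step and the skew-tent re-seeding described above; parameter values are arbitrary.

```python
import numpy as np

def logistic_stream(x0, r, n):
    """Iterate x_{i+1} = r * x_i * (1 - x_i) and quantise each value to a byte."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

plain = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)   # toy plain-text image
key = logistic_stream(x0=0.3141, r=3.9999, n=plain.size).reshape(plain.shape)

cipher = plain ^ key        # substitution only; the k x k block shuffling is not shown
recovered = cipher ^ key
assert np.array_equal(recovered, plain)
print(cipher)
```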

  4. Accessibility of online self-management support websites for people with osteoarthritis: A text content analysis.

    PubMed

    Chapman, Lara; Brooks, Charlotte; Lawson, Jem; Russell, Cynthia; Adams, Jo

    2017-01-01

    Objectives: This study assessed accessibility of online self-management support webpages for people with osteoarthritis by considering readability of text and inclusion of images and videos. Methods: Eight key search terms developed and agreed with patient and public involvement representatives were entered into the Google search engine. Webpages from the first page of Google search results were identified. Readability of webpage text was assessed using two standardised readability indexes, and the number of images and videos included on each webpage was recorded. Results: Forty-nine webpages met the inclusion criteria and were assessed. Only five of the webpages met the recommended reading level for health education literature. Almost half (44.9%) of webpages did not include any informative images to support written information. A minority of the webpages (6.12%) included relevant videos. Discussion: Information provided on health webpages aiming to support patients to self-manage osteoarthritis may not be read, understood or used effectively by many people accessing it. Recommendations include using accessible language in health information, supplementing written information with visual resources and reviewing content and readability in collaboration with patient and public involvement representatives.
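    As an illustration of scoring text with a standard readability formula, the sketch below computes the Flesch-Kincaid grade level with a crude syllable counter. The study's two indexes are not named in the abstract, so this particular formula is an assumption made only for demonstration.

```python
import re

def count_syllables(word):
    # Very rough: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59

sample = "Osteoarthritis is a common joint condition. Gentle exercise can often help."
print(round(flesch_kincaid_grade(sample), 1))
```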

  5. Hypertext Image Retrieval: The Evolution of an Application.

    ERIC Educational Resources Information Center

    Roberts, G. Louis; Kenney, Carol E.

    1991-01-01

    Describes the development and implementation of a full-text image retrieval system at the Boeing Commercial Airplane Group. The conversion of card formats to a microcomputer-based system using HyperCard is described; the online system architecture is explained; and future plans are discussed, including conversion to digital images. (LRW)

  6. Image, word, action: interpersonal dynamics in a photo-sharing community.

    PubMed

    Suler, John

    2008-10-01

    In online photo-sharing communities, the individual's expression of self and the relationships that evolve among members are determined by the kinds of images that are shared, by the words exchanged among members, and by interpersonal actions that do not specifically rely on images or text. This article examines the dynamics of personal expression via images in Flickr, including a proposed system for identifying the dimensions of imagistic communication and a discussion of the psychological meanings embedded in a sequence of images. It explores how photographers use text descriptors to supplement their images and how different types of comments on photographs influence interpersonal relationships. The "fav"--when members choose an image as one of their favorites--is examined as one type of action that can serve a variety of interpersonal functions. Although images play a powerful role in the expression of self, it is the integration of images, words, and actions that maximizes the development of relationships.

  7. Network of fully integrated multispecialty hospital imaging systems

    NASA Astrophysics Data System (ADS)

    Dayhoff, Ruth E.; Kuzmak, Peter M.

    1994-05-01

    The Department of Veterans Affairs (VA) DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images are displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system, allowing integrated displays of text and image data across medical specialties. Clinicians can view screens of "thumbnail" images for all studies or procedures performed on a selected patient. Two VA medical centers currently have DHCP Imaging Systems installed, and others are planned. All VA medical centers and other VA facilities are connected by a wide area packet-switched network. The VA's electronic mail software has been modified to allow inclusion of binary data such as images in addition to the traditional text data. Testing of this multimedia electronic mail system is underway for medical teleconsultation.

  8. Taking Full Advantage of Children's Literature

    ERIC Educational Resources Information Center

    Serafini, Frank

    2012-01-01

    Teachers need a deeper understanding of the texts being discussed, in particular the various textual and visual aspects of picturebooks themselves, including the images, written text and design elements, to support how readers make sense of these texts. As teachers become familiar with aspects of literary criticism, art history, visual grammar,…

  9. Segmenting texts from outdoor images taken by mobile phones using color features

    NASA Astrophysics Data System (ADS)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Recognizing text in images taken by mobile phones with low resolution has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization and noise filtering, where we binarize the input image in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose a Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines on the market.
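    Step (i) can be sketched as per-channel Otsu binarisation followed by removal of small connected components. The threshold method, the component-size cut-off and the synthetic input are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(channel):
    # Classic Otsu: maximise between-class variance over all byte thresholds
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    total = channel.size
    mean_all = (hist * np.arange(256)).sum() / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0, w1 = cum / total, 1 - cum / total
        m0 = cum_mean / cum
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize_channels(rgb, min_size=10):
    masks = []
    for c in range(3):
        ch = rgb[..., c]
        mask = ch < otsu_threshold(ch)              # assume dark text on a brighter background
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
        masks.append(keep)                          # component-level noise filtering
    return masks

rgb = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in for a phone photo
print([m.sum() for m in binarize_channels(rgb)])
```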

  10. Scene text detection via extremal region based double threshold convolutional network classification

    PubMed Central

    Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan

    2017-01-01

    In this paper, we present a robust text detection approach for natural images based on a region proposal mechanism. A powerful low-level detector named saliency-enhanced MSER, extended from the widely used MSER, is proposed by incorporating saliency detection methods, which ensures a high recall rate. Given a natural image, character candidates are extracted from three channels in a perception-based illumination-invariant color space by the saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information, including pixel-level and character-level information, as the character candidate classifier. Each image patch is classified as strong text, weak text or non-text by double-threshold filtering instead of conventional one-step classification, leveraging confidence scores obtained via the CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from the weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on the public datasets ICDAR 2011 and ICDAR 2013. PMID:28820891
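    The double-threshold filtering and the neighbourhood tracking of weak candidates can be illustrated with a small sketch. The thresholds, the distance-based notion of neighbourhood and the toy data are assumptions; the CNN itself is not reproduced, only its confidence scores are taken as given.

```python
import numpy as np

def double_threshold_tracking(boxes, scores, hi=0.8, lo=0.4, max_gap=20.0):
    """boxes: (N, 2) candidate centres; scores: CNN text confidences in [0, 1]."""
    strong = set(np.flatnonzero(scores >= hi))
    weak = set(np.flatnonzero((scores >= lo) & (scores < hi)))
    kept = set(strong)
    frontier = list(strong)
    while frontier:                      # grow from strong candidates into nearby weak ones
        i = frontier.pop()
        for j in list(weak):
            if np.linalg.norm(boxes[i] - boxes[j]) <= max_gap:
                weak.remove(j)
                kept.add(j)
                frontier.append(j)
    return sorted(kept)

boxes = np.array([[0, 0], [15, 0], [100, 100], [30, 0]])
scores = np.array([0.9, 0.5, 0.5, 0.45])
print(double_threshold_tracking(boxes, scores))   # the isolated weak box at (100, 100) is dropped
```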

  11. Design of the 2D electron cyclotron emission imaging instrument for the J-TEXT tokamak.

    PubMed

    Pan, X M; Yang, Z J; Ma, X D; Zhu, Y L; Luhmann, N C; Domier, C W; Ruan, B W; Zhuang, G

    2016-11-01

    A new 2D Electron Cyclotron Emission Imaging (ECEI) diagnostic is being developed for the J-TEXT tokamak. It will provide the 2D electron temperature information with high spatial, temporal, and temperature resolution. The new ECEI instrument is being designed to support fundamental physics investigations on J-TEXT including MHD, disruption prediction, and energy transport. The diagnostic contains two dual dipole antenna arrays corresponding to F band (90-140 GHz) and W band (75-110 GHz), respectively, and comprises a total of 256 channels. The system can observe the same magnetic surface at both the high field side and low field side simultaneously. An advanced optical system has been designed which permits the two arrays to focus on a wide continuous region or two radially separate regions with high imaging spatial resolution. It also incorporates excellent field curvature correction with field curvature adjustment lenses. An overview of the diagnostic and the technical progress including the new remote control technique are presented.

  12. Design of the 2D electron cyclotron emission imaging instrument for the J-TEXT tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, X. M.; Yang, Z. J., E-mail: yangzj@hust.edu.cn; Ma, X. D.

    2016-11-15

    A new 2D Electron Cyclotron Emission Imaging (ECEI) diagnostic is being developed for the J-TEXT tokamak. It will provide the 2D electron temperature information with high spatial, temporal, and temperature resolution. The new ECEI instrument is being designed to support fundamental physics investigations on J-TEXT including MHD, disruption prediction, and energy transport. The diagnostic contains two dual dipole antenna arrays corresponding to F band (90-140 GHz) and W band (75-110 GHz), respectively, and comprises a total of 256 channels. The system can observe the same magnetic surface at both the high field side and low field side simultaneously. An advanced optical system has been designed which permits the two arrays to focus on a wide continuous region or two radially separate regions with high imaging spatial resolution. It also incorporates excellent field curvature correction with field curvature adjustment lenses. An overview of the diagnostic and the technical progress including the new remote control technique are presented.

  13. New method for identifying features of an image on a digital video display

    NASA Astrophysics Data System (ADS)

    Doyle, Michael D.

    1991-04-01

    The MetaMap process extends the concept of direct manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology is described, along with other possible applications. The MetaMap process is protected by U.S. patent #4

  14. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
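    The optimisation loop can be sketched with an off-the-shelf differential evolution routine. Because the paper's fitness is an interactive human judgment, a numeric surrogate (distance of the kept pixels' mean colour to a target ink colour) is substituted here so the example runs unattended; the parameterisation of the colour filter is likewise an assumption.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
pixels = np.vstack([rng.normal([40, 30, 120], 8, (200, 3)),     # synthetic "ink" pixels
                    rng.normal([200, 190, 180], 8, (200, 3))])  # synthetic "paper" pixels
target_ink = np.array([40, 30, 120])

def surrogate_judgment(params):
    # params: an RGB centre and a radius defining which colours are kept
    centre, radius = params[:3], params[3]
    kept = pixels[np.linalg.norm(pixels - centre, axis=1) < radius]
    if len(kept) == 0:
        return 1e6
    # Good separations keep pixels close to the ink colour and not much else
    return np.linalg.norm(kept.mean(axis=0) - target_ink) + 0.01 * abs(len(kept) - 200)

bounds = [(0, 255)] * 3 + [(1, 150)]
result = differential_evolution(surrogate_judgment, bounds, seed=0, maxiter=50)
print(result.x)
```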

  15. Correcting geometric and photometric distortion of document images on a smartphone

    NASA Astrophysics Data System (ADS)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.

  16. Graph-based layout analysis for PDF documents

    NASA Astrophysics Data System (ADS)

    Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao

    2013-03-01

    To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digitally born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be straightforwardly exploited. To integrate traditional image-based document analysis with the inherent metadata provided by a PDF parser, the page primitives including text, image and path elements are processed to produce a text and a non-text layer for respective analysis. The graph-based method is developed at the superpixel representation level, and page text elements corresponding to vertices are used to construct an undirected graph. The Euclidean distance between adjacent vertices is applied in a top-down manner to cut the graph tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. On the other hand, non-textual objects are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
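    The top-down cut of the minimum spanning tree can be sketched directly with SciPy: text-element centres become vertices, MST edges (Kruskal-style) longer than a threshold are removed, and the remaining connected components are the blocks. The coordinates and the 60-pixel threshold below are illustrative; the bottom-up text-line step is omitted.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

centres = np.array([[10, 10], [30, 10], [50, 10],   # one line of text elements
                    [10, 200], [30, 200]])          # a second, distant block
dist = squareform(pdist(centres))                   # pairwise Euclidean distances
mst = minimum_spanning_tree(dist).toarray()         # Kruskal-style spanning tree
mst[mst > 60] = 0                                   # cut edges longer than the threshold
n_blocks, labels = connected_components(mst, directed=False)
print(n_blocks, labels)                             # -> 2 blocks
```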

  17. Tomorrow's Online in Today's CD-ROM: Interfaces and Images.

    ERIC Educational Resources Information Center

    Jacso, Peter

    1994-01-01

    Considers the appropriateness of using CD-ROM versus online systems. Topics discussed include cost effectiveness; how current the information is; full-text capabilities; a variety of interfaces; graphical user interfaces on CD-ROM; and possibilities for image representations. (LRW)

  18. Is the Recall of Verbal-Spatial Information from Working Memory Affected by Symptoms of ADHD?

    ERIC Educational Resources Information Center

    Caterino, Linda C.; Verdi, Michael P.

    2012-01-01

    Objective: The Kulhavy model for text learning using organized spatial displays proposes that learning will be increased when participants view visual images prior to related text. In contrast to previous studies, this study also included students who exhibited symptoms of ADHD. Method: Participants were presented with either a map-text or…

  19. Influence of Images on the Evaluation of Jams Using Conjoint Analysis Combined with Check-All-That-Apply (CATA) Questions.

    PubMed

    Miraballes, Marcelo; Gámbaro, Adriana

    2018-01-01

    A study of the influence of the use of images in a conjoint analysis combined with check-all-that-apply (CATA) questions on jams was carried out. The relative importance of flavor and of the information presented on the label for willingness to purchase and for the perception of how healthy the product is was evaluated. Sixty consumers evaluated the stimuli presented only in text format (session 1), and another group of 60 consumers did so by receiving the stimuli in text format along with an image of the product (session 2). In addition, for each stimulus, consumers answered a CATA question consisting of 20 terms related to their involvement with the product. The perception of healthiness increased when the texts were accompanied by images and also increased when the text included information. Willingness to purchase was only influenced by the flavor of the jams. The presence of images did not influence the choice of terms in the CATA question, which was influenced by the information presented in the text. The use of a check-all-that-apply question on concepts provided an interesting possibility when combined with the results from the conjoint analysis, improving the comprehension of consumers' perception. Using CATA questions as an alternative way of evaluating consumer involvement seems to be beneficial and should be evaluated further. © 2017 Institute of Food Technologists®.

  20. Graphic Warning Labels Elicit Affective and Thoughtful Responses from Smokers: Results of a Randomized Clinical Trial.

    PubMed

    Evans, Abigail T; Peters, Ellen; Strasser, Andrew A; Emery, Lydia F; Sheerin, Kaitlin M; Romer, Daniel

    2015-01-01

    Observational research suggests that placing graphic images on cigarette warning labels can reduce smoking rates, but field studies lack experimental control. Our primary objective was to determine the psychological processes set in motion by naturalistic exposure to graphic vs. text-only warnings in a randomized clinical trial involving exposure to modified cigarette packs over a 4-week period. Theories of graphic-warning impact were tested by examining affect toward smoking, credibility of warning information, risk perceptions, quit intentions, warning label memory, and smoking risk knowledge. Adults who smoked between 5 and 40 cigarettes daily (N = 293; mean age = 33.7), did not have a contra-indicated medical condition, and did not intend to quit were recruited from Philadelphia, PA and Columbus, OH. Smokers were randomly assigned to receive their own brand of cigarettes for four weeks in one of three warning conditions: text only, graphic images plus text, or graphic images with elaborated text. Data from 244 participants who completed the trial were analyzed in structural-equation models. The presence of graphic images (compared to text-only) caused more negative affect toward smoking, a process that indirectly influenced risk perceptions and quit intentions (e.g., image->negative affect->risk perception->quit intention). Negative affect from graphic images also enhanced warning credibility including through increased scrutiny of the warnings, a process that also indirectly affected risk perceptions and quit intentions (e.g., image->negative affect->risk scrutiny->warning credibility->risk perception->quit intention). Unexpectedly, elaborated text reduced warning credibility. Finally, graphic warnings increased warning-information recall and indirectly increased smoking-risk knowledge at the end of the trial and one month later. In the first naturalistic clinical trial conducted, graphic warning labels are more effective than text-only warnings in encouraging smokers to consider quitting and in educating them about smoking's risks. Negative affective reactions to smoking, thinking about risks, and perceptions of credibility are mediators of their impact. Clinicaltrials.gov NCT01782053.

  1. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images, so that this information can be exploited in effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578

  2. Contesting Contained Bodily Coaching Experiences

    ERIC Educational Resources Information Center

    Johnson, Richard

    2013-01-01

    Through critical readings of several images and texts, including photographs and artifacts in this collected montage, my aim here is to use multiple interactional analyses (visual culture techniques and deconstructive techniques) to assist in the critique of these presented visual images that represent current coaching policies in the USA.

  3. Enhancing L2 Reading Comprehension with Hypermedia Texts: Student Perceptions

    ERIC Educational Resources Information Center

    Garrett-Rucks, Paula; Howles, Les; Lake, William M.

    2015-01-01

    This study extends current research about L2 hypermedia texts by investigating the combined use of audiovisual features including: (a) Contextualized images, (b) rollover translations, (c) cultural information, (d) audio explanations and (e) comprehension check exercises. Specifically, student perceptions of hypermedia readings compared to…

  4. Change detection of medical images using dictionary learning techniques and principal component analysis.

    PubMed

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-07-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of magnetic resonance imaging (MRI) scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques are being used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. We present an improved version of the EigenBlockCD algorithm, named the EigenBlockCD-2. The EigenBlockCD-2 algorithm performs an initial global registration and identifies the changes between serial MR images of the brain. Blocks of pixels from a baseline scan are used to train local dictionaries to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between [Formula: see text] and [Formula: see text] norms as two possible similarity measures in the improved EigenBlockCD-2 algorithm. We show the advantages of the [Formula: see text] norm over the [Formula: see text] norm both theoretically and numerically. We also demonstrate the performance of the new EigenBlockCD-2 algorithm for detecting changes of MR images and compare our results with those provided in the recent literature. Experimental results with both simulated and real MRI scans show that our improved EigenBlockCD-2 algorithm outperforms the previous methods. It detects clinical changes while ignoring the changes due to the patient's position and other acquisition artifacts.
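    A hedged sketch of the block-dictionary idea follows: baseline blocks around a location form a local dictionary, PCA (via SVD) reduces it, and a large reconstruction residual for the follow-up block signals change. Block size, search radius and the number of components are illustrative choices, not the EigenBlockCD-2 settings, and registration is assumed to have been done already.

```python
import numpy as np

def block_change_score(baseline, follow_up, y, x, size=8, search=4, n_comp=5):
    # Local dictionary: baseline blocks in a neighbourhood around (y, x)
    patches = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patches.append(baseline[y + dy:y + dy + size, x + dx:x + dx + size].ravel())
    D = np.array(patches, dtype=float)
    mean = D.mean(axis=0)
    _, _, Vt = np.linalg.svd(D - mean, full_matrices=False)
    basis = Vt[:n_comp]                               # PCA basis of the local dictionary
    target = follow_up[y:y + size, x:x + size].ravel().astype(float) - mean
    recon = basis.T @ (basis @ target)                # project onto the reduced dictionary
    return np.linalg.norm(target - recon)             # large residual => likely change

rng = np.random.default_rng(1)
base = rng.normal(size=(64, 64))
follow = base.copy()
follow[30:38, 30:38] += 5.0                            # simulated new lesion
print(block_change_score(base, follow, 30, 30))        # high score at the changed block
print(block_change_score(base, follow, 10, 10))        # low score where nothing changed
```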

  5. Interpretive versus noninterpretive content in top-selling radiology textbooks: what are we teaching medical students?

    PubMed

    Webb, Emily M; Vella, Maya; Straus, Christopher M; Phelps, Andrew; Naeger, David M

    2015-04-01

    There are few data on whether appropriate, cost-effective, and safe ordering of imaging examinations is adequately taught in US medical school curricula. We sought to determine the proportion of noninterpretive content (such as appropriate ordering) versus interpretive content (such as reading a chest x-ray) in the top-selling medical student radiology textbooks. We performed an online search to identify a ranked list of the six top-selling general radiology textbooks for medical students. Each textbook was reviewed, including content in the text, tables, images, figures, appendices, practice questions, question explanations, and glossaries. Individual pages of text and individual images were semiquantitatively scored on a six-level scale as to the percentage of material that was interpretive versus noninterpretive. The predominant imaging modality addressed in each was also recorded. Descriptive statistical analysis was performed. All six books had more interpretive content. On average, 1.4 pages of text focused on interpretation for every one page focused on noninterpretive content. Seventeen images/figures were dedicated to interpretive skills for every one focused on noninterpretive skills. In all books, the largest proportion of text and image content was dedicated to plain films (51.2%), with computed tomography (CT) a distant second (16%). The content on radiographs (3.1:1) and CT (1.6:1) was more interpretive than not. The current six top-selling medical student radiology textbooks contain a preponderance of material teaching image interpretation compared to material teaching noninterpretive skills, such as appropriate imaging examination selection, rational utilization, and patient safety. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  6. The Viking Mosaic Catalog, Volume 2

    NASA Technical Reports Server (NTRS)

    Evans, N.

    1982-01-01

    A collection of more than 500 mosaics prepared from Viking Orbiter images is given. Accompanying each mosaic is a footprint plot, which identifies by location, picture number, and order number, each frame in the mosaic. Corner coordinates and pertinent imaging information are also included. A short text provides the camera characteristics, image format, and data processing information necessary for using the mosaic plates as a research aid. Procedures for ordering mosaic enlargements and individual images are also provided.

  7. School and Library Media. Introduction; The Uniform Computer Information Transactions Act (UCITA): More Critical for Educators than Copyright Law?; Redefining Professional Growth: New Attitudes, New Tools--A Case Study; Diversity in School Library Media Center Resources; Image-Text Relationships in Web Pages; Aiming for Effective Student Learning in Web-Based Courses: Insights from Student Experiences.

    ERIC Educational Resources Information Center

    Fitzgerald, Mary Ann; Gregory, Vicki L.; Brock, Kathy; Bennett, Elizabeth; Chen, Shu-Hsien Lai; Marsh, Emily; Moore, Joi L.; Kim, Kyung-Sun; Esser, Linda R.

    2002-01-01

    Chapters in this section of "Educational Media and Technology Yearbook" examine important trends prominent in the landscape of the school library media profession in 2001. Themes include mandated educational reform; diversity in school library resources; communication through image-text juxtaposition in Web pages; and professional development and…

  8. Mining biomedical images towards valuable information retrieval in biomedical and life sciences.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images, so that this information can be exploited in effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.

  9. World Wide Web Based Image Search Engine Using Text and Image Content Features

    NASA Astrophysics Data System (ADS)

    Luo, Bo; Wang, Xiaogang; Tang, Xiaoou

    2003-01-01

    Using both text and image content features, a hybrid image retrieval system for the World Wide Web is developed in this paper. We first use a text-based image meta-search engine to retrieve images from the Web based on the text information on the image host pages, providing an initial image set. Because of the high-speed and low-cost nature of the text-based approach, we can easily retrieve a broad coverage of images with a high recall rate and a relatively low precision. An image content-based ordering is then performed on the initial image set. All the images are clustered into different folders based on image content features. In addition, the images can be re-ranked by the content features according to user feedback. Such a design makes it truly practical to use both text and image content for image retrieval over the Internet. Experimental results confirm the efficiency of the system.

  10. The interactive digital video interface

    NASA Technical Reports Server (NTRS)

    Doyle, Michael D.

    1989-01-01

    A frequent complaint in the computer-oriented trade journals is that current hardware technology is progressing so quickly that software developers cannot keep up. An example of this phenomenon can be seen in the field of microcomputer graphics. To exploit the advantages of new mechanisms of information storage and retrieval, new approaches must be made towards incorporating existing programs as well as developing entirely new applications. A particular area of need is the correlation of discrete image elements to textual information. The interactive digital video (IDV) interface embodies a new concept in software design which addresses these needs. The IDV interface is a patented device- and language-independent process for identifying image features on a digital video display which allows a number of different processes to be keyed to that identification. Its capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. Sophisticated interrelationships can be set up between images, text, and program control mechanisms.

  11. Educational Value of Digital Whole Slides Accompanying Published Online Pathology Journal Articles: A Multi-Institutional Study.

    PubMed

    Yin, Feng; Han, Gang; Bui, Marilyn M; Gibbs, Julie; Martin, Ian; Sundharkrishnan, Lohini; King, Lauren; Jabcuga, Christine; Stuart, Lauren N; Hassell, Lewis A

    2016-07-01

    Despite great interest in using whole slide imaging (WSI) in pathology practice and education, few pathology journals have published WSI pertinent to articles within their pages or as supplemental materials. This study evaluated whether there is measurable added educational value of including WSI in publications. Thirty-seven participants, 16 (43.3%), 15 (40.5%), and 6 (16.2%) junior pathology residents (postgraduate year 1-2), senior pathology residents (postgraduate year 3-4), and board-certified pathologists, respectively, read a sequence of 10 journal articles on a wide range of pathology topics. A randomized subgroup also reviewed the WSI published with the articles. Both groups completed a survey tool assessing recall of text-based content and of image-based material pertinent to the diseases but not present in the fixed published images. The group examining WSI had higher performance scores on 72% of image-based questions (36 of 50 questions) compared with the non-WSI group. As an internal study control, the WSI group had higher performance scores on only 40% of text-based questions (6 of 15 questions). The WSI group had significantly better performance than the non-WSI group for image-based questions compared with text-based questions (P < .05, Fisher exact test). Our study provides supporting evidence that WSI offers enhanced value to the learner beyond the text and fixed images selected by the author. We strongly encourage more journals to incorporate WSI into their publications.

  12. Diversification of visual media retrieval results using saliency detection

    NASA Astrophysics Data System (ADS)

    Muratov, Oleg; Boato, Giulia; De Natale, Franesco G. B.

    2013-03-01

    Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversification of image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, and the use of visual features is inevitable. Visual saliency is information about the main object of an image, implicitly included by humans while creating visual content. For this reason it is natural to exploit this information for the task of diversifying the content. In this work we study whether visual saliency can be used for the task of diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of the retrieval results.
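    One way to realise saliency-driven re-ranking is a greedy, MMR-style selection over saliency-derived descriptors, as sketched below. This generic formulation is an assumption standing in for the paper's specific method; the descriptors and relevance scores are synthetic.

```python
import numpy as np

def rerank_for_diversity(relevance, saliency_feats, lam=0.7):
    """Greedily pick results that balance relevance against similarity to earlier picks."""
    n = len(relevance)
    remaining = list(range(n))
    order = []
    while remaining:
        best, best_val = None, -np.inf
        for i in remaining:
            sim = max((float(saliency_feats[i] @ saliency_feats[j]) for j in order), default=0.0)
            val = lam * relevance[i] - (1 - lam) * sim
            if val > best_val:
                best, best_val = i, val
        order.append(best)
        remaining.remove(best)
    return order

rng = np.random.default_rng(2)
feats = rng.normal(size=(6, 16))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)   # unit-norm saliency descriptors
rel = np.linspace(1.0, 0.5, 6)                          # original retrieval scores
print(rerank_for_diversity(rel, feats))
```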

  13. Graphic Warning Labels Elicit Affective and Thoughtful Responses from Smokers: Results of a Randomized Clinical Trial

    PubMed Central

    Evans, Abigail T.; Peters, Ellen; Strasser, Andrew A.; Emery, Lydia F.; Sheerin, Kaitlin M.; Romer, Daniel

    2015-01-01

    Objective: Observational research suggests that placing graphic images on cigarette warning labels can reduce smoking rates, but field studies lack experimental control. Our primary objective was to determine the psychological processes set in motion by naturalistic exposure to graphic vs. text-only warnings in a randomized clinical trial involving exposure to modified cigarette packs over a 4-week period. Theories of graphic-warning impact were tested by examining affect toward smoking, credibility of warning information, risk perceptions, quit intentions, warning label memory, and smoking risk knowledge. Methods: Adults who smoked between 5 and 40 cigarettes daily (N = 293; mean age = 33.7), did not have a contra-indicated medical condition, and did not intend to quit were recruited from Philadelphia, PA and Columbus, OH. Smokers were randomly assigned to receive their own brand of cigarettes for four weeks in one of three warning conditions: text only, graphic images plus text, or graphic images with elaborated text. Results: Data from 244 participants who completed the trial were analyzed in structural-equation models. The presence of graphic images (compared to text-only) caused more negative affect toward smoking, a process that indirectly influenced risk perceptions and quit intentions (e.g., image->negative affect->risk perception->quit intention). Negative affect from graphic images also enhanced warning credibility including through increased scrutiny of the warnings, a process that also indirectly affected risk perceptions and quit intentions (e.g., image->negative affect->risk scrutiny->warning credibility->risk perception->quit intention). Unexpectedly, elaborated text reduced warning credibility. Finally, graphic warnings increased warning-information recall and indirectly increased smoking-risk knowledge at the end of the trial and one month later. Conclusions: In the first naturalistic clinical trial conducted, graphic warning labels are more effective than text-only warnings in encouraging smokers to consider quitting and in educating them about smoking's risks. Negative affective reactions to smoking, thinking about risks, and perceptions of credibility are mediators of their impact. Trial Registration: Clinicaltrials.gov NCT01782053 PMID:26672982

  14. Speeding up the Raster Scanning Methods used in the X-Ray Fluorescence Imaging of the Ancient Greek Text of Archimedes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Manisha; /Norfolk State U.

    2006-08-24

    Progress has been made at the Stanford Linear Accelerator Center (SLAC) toward deciphering the remaining 10-20% of ancient Greek text contained in the Archimedes palimpsest. The text is known to contain valuable works by the mathematician, including the "Method of Mechanical Theorems, the Equilibrium of Planes, On Floating Bodies", and several diagrams as well. The only surviving copy of the text was recycled into a prayer book in the Middle Ages. The ink used to write on the goat skin parchment is partly composed of iron, which is visible by x-ray radiation. To image the palimpsest pages, the parchment is framed and placed in a stage that moves according to the raster method. When an x-ray beam strikes the parchment, the iron in the ink is detected by a germanium detector. The resulting signal is converted to a gray-scale image in the imaging program, Rasplot. It is extremely important that each line of data is perfectly aligned with the line that came before it because the image is scanned in two directions. The objectives of this experiment were to determine the best parameters for producing well-aligned images and to reduce the scanning time. Imaging half a page of parchment during previous beam time for this project took thirty hours. Equations were produced to evaluate count time, shutter time, and the number of pixels in this experiment. On Beamline 6-2 at the Stanford Synchrotron Radiation Laboratory (SSRL), actual scanning time was reduced by one fourth. The remaining pages were successfully imaged and sent to ancient Greek experts for translation.

  15. Color doppler in clinical cardiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, W.J.

    1987-01-01

    A presentation of color Doppler, which enables physicians to pinpoint problems and develop effective treatment. State-of-the-art illustrations and layout, with color images and explanatory text, are included.

  16. Introduction to Color Imaging Science

    NASA Astrophysics Data System (ADS)

    Lee, Hsien-Che

    2005-04-01

    Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.

  17. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.

  18. Factors in life science textbooks that may deter girls' interest in science

    NASA Astrophysics Data System (ADS)

    Potter, Ellen F.; Rosser, Sue V.

    In order to examine factors that may deter girls' interest in science, five seventh-grade life science textbooks were analyzed for sexism in language, images, and curricular content, and for features of activities that have been found to be useful for motivating girls. Although overt sexism was not apparent, subtle forms of sexism in the selection of language, images, and curricular content were found. Activities had some features useful to girls, but other features were seldom included. Teachers may wish to use differences that were found among texts as one basis for text selection.

  19. Recognition of pornographic web pages by classifying texts and images.

    PubMed

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
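    The fusion step for image pages can be illustrated by combining the two classifiers' posteriors under a conditional-independence assumption, as below. The prior and the example scores are invented for the demonstration.

```python
def bayes_fuse(p_text, p_image, prior=0.5):
    """Combine P(pornographic | text) and P(pornographic | image) assuming the two
    classifiers are conditionally independent given the page class."""
    # Convert each posterior back to a likelihood ratio, multiply, and re-normalise.
    lr_text = (p_text / (1 - p_text)) / (prior / (1 - prior))
    lr_image = (p_image / (1 - p_image)) / (prior / (1 - prior))
    odds = (prior / (1 - prior)) * lr_text * lr_image
    return odds / (1 + odds)

print(bayes_fuse(0.8, 0.6))   # combined posterior, higher than either input alone
```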

  20. Drug related webpages classification using images and text information based on multi-kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan

    2015-12-01

    In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
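    Feature-level fusion with multiple kernels can be sketched as a weighted sum of per-modality kernels fed to an SVM with a precomputed kernel. The fixed weights below stand in for the kernel combination that MKL would learn, and the bag-of-words data are synthetic.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_text = rng.poisson(1.0, size=(40, 200)).astype(float)   # text BOW histograms (synthetic)
X_img = rng.poisson(1.0, size=(40, 500)).astype(float)    # image BOW histograms (synthetic)
y = rng.integers(0, 2, size=40)

# Weighted sum of per-modality kernels; true MKL would learn these weights
K = 0.6 * rbf_kernel(X_text, gamma=0.01) + 0.4 * rbf_kernel(X_img, gamma=0.005)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))    # training accuracy on the fused kernel (illustrative only)
```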

  1. Adaptive removal of background and white space from document images using seam categorization

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.
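    For contrast with the adaptive method proposed above, the naïve variant is easy to state: delete every row and column whose pixels are all close to the background colour. The sketch below implements only that baseline; the seam categorisation and the local/global context analysis are not shown.

```python
import numpy as np

def remove_uniform_rows_cols(img, bg=255, tol=5):
    # Keep only rows/columns that contain at least one non-background pixel
    rows_keep = np.any(np.abs(img.astype(int) - bg) > tol, axis=1)
    cols_keep = np.any(np.abs(img.astype(int) - bg) > tol, axis=0)
    return img[rows_keep][:, cols_keep]

page = np.full((12, 12), 255, dtype=np.uint8)
page[3:5, 2:9] = 0          # a line of "text"
page[8:10, 2:9] = 0         # another line, separated by blank rows
print(remove_uniform_rows_cols(page).shape)   # -> (4, 7)
```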

  2. 47 CFR 14.21 - Performance Objectives.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...

  3. 47 CFR 14.21 - Performance Objectives.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...

  4. 47 CFR 7.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... necessary to operate and use the product, including but not limited to, text, static or dynamic images... disabilities to achieve access. (j) The term telecommunications equipment shall mean equipment, other than...

  5. 47 CFR 7.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... necessary to operate and use the product, including but not limited to, text, static or dynamic images... disabilities to achieve access. (j) The term telecommunications equipment shall mean equipment, other than...

  6. 47 CFR 7.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... necessary to operate and use the product, including but not limited to, text, static or dynamic images... disabilities to achieve access. (j) The term telecommunications equipment shall mean equipment, other than...

  7. 47 CFR 7.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... necessary to operate and use the product, including but not limited to, text, static or dynamic images... disabilities to achieve access. (j) The term telecommunications equipment shall mean equipment, other than...

  8. Images of Australia in Elementary Social Studies Texts: Some Alternative Strategies.

    ERIC Educational Resources Information Center

    Birchall, Gregory; Faichney, Gavin

    Elementary social studies textbooks in the United States were analyzed to determine the sort of information they contained about Australia. Only those texts which made substantive references to Australia were analyzed; these included 4 books for level 3, 2 for level 4, and 4 for level 6. Books examined were all published by major textbook…

  9. Advanced Digital Imaging Laboratory Using MATLAB® (Second edition)

    NASA Astrophysics Data System (ADS)

    Yaroslavsky, Leonid P.

    2016-09-01

    The first edition of this text book focussed on providing practical hands-on experience in digital imaging techniques for graduate students and practitioners keeping to a minimum any detailed discussion on the underlying theory. In this new extended edition, the author builds on the strength of the original edition by expanding the coverage to include formulation of the major theoretical results that underlie the exercises as well as introducing numerous modern concepts and new techniques. Whether you are studying or already using digital imaging techniques, developing proficiency in the subject is not possible without mastering practical skills. Including more than 100 MATLAB® exercises, this book delivers a complete applied course in digital imaging theory and practice. Part of IOP Series in Imaging Engineering Supplementary MATLAB codes and data files are available within Book Information.

  10. An assessment of multimodal imaging of subsurface text in mummy cartonnage using surrogate papyrus phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Adam; Piquette, Kathryn E.; Bergmann, Uwe

    Ancient Egyptian mummies were often covered with an outer casing, panels and masks made from cartonnage: a lightweight material made from linen, plaster, and recycled papyrus held together with adhesive. Egyptologists, papyrologists, and historians aim to recover and read extant text on the papyrus contained within cartonnage layers, but some methods, such as dissolving mummy casings, are destructive. The use of an advanced range of different imaging modalities was investigated to test the feasibility of non-destructive approaches applied to multi-layered papyrus found in ancient Egyptian mummy cartonnage. Eight different techniques were compared by imaging four synthetic phantoms designed to provide robust, well-understood, yet relevant sample standards using modern papyrus and replica inks. The techniques include optical (multispectral imaging with reflection and transillumination, and optical coherence tomography), X-ray (X-ray fluorescence imaging, X-ray fluorescence spectroscopy, X-ray micro computed tomography and phase contrast X-ray) and terahertz-based approaches. Optical imaging techniques were able to detect inks on all four phantoms, but were unable to significantly penetrate papyrus. X-ray-based techniques were sensitive to iron-based inks with excellent penetration but were not able to detect carbon-based inks. However, using terahertz imaging, it was possible to detect carbon-based inks with good penetration but with less sensitivity to iron-based inks. The phantoms allowed reliable and repeatable tests to be made at multiple sites on three continents. Finally, the tests demonstrated that each imaging modality needs to be optimised for this particular application: it is, in general, not sufficient to repurpose an existing device without modification. Furthermore, it is likely that no single imaging technique will be able to robustly detect and enable the reading of text within ancient Egyptian mummy cartonnage. However, by carefully selecting, optimising and combining techniques, text contained within these fragile and rare artefacts may eventually be open to non-destructive imaging, identification, and interpretation.

  11. An assessment of multimodal imaging of subsurface text in mummy cartonnage using surrogate papyrus phantoms

    DOE PAGES

    Gibson, Adam; Piquette, Kathryn E.; Bergmann, Uwe; ...

    2018-02-26

    Ancient Egyptian mummies were often covered with an outer casing, panels and masks made from cartonnage: a lightweight material made from linen, plaster, and recycled papyrus held together with adhesive. Egyptologists, papyrologists, and historians aim to recover and read extant text on the papyrus contained within cartonnage layers, but some methods, such as dissolving mummy casings, are destructive. The use of an advanced range of different imaging modalities was investigated to test the feasibility of non-destructive approaches applied to multi-layered papyrus found in ancient Egyptian mummy cartonnage. Eight different techniques were compared by imaging four synthetic phantoms designed to provide robust, well-understood, yet relevant sample standards using modern papyrus and replica inks. The techniques include optical (multispectral imaging with reflection and transillumination, and optical coherence tomography), X-ray (X-ray fluorescence imaging, X-ray fluorescence spectroscopy, X-ray micro computed tomography and phase contrast X-ray) and terahertz-based approaches. Optical imaging techniques were able to detect inks on all four phantoms, but were unable to significantly penetrate papyrus. X-ray-based techniques were sensitive to iron-based inks with excellent penetration but were not able to detect carbon-based inks. However, using terahertz imaging, it was possible to detect carbon-based inks with good penetration but with less sensitivity to iron-based inks. The phantoms allowed reliable and repeatable tests to be made at multiple sites on three continents. Finally, the tests demonstrated that each imaging modality needs to be optimised for this particular application: it is, in general, not sufficient to repurpose an existing device without modification. Furthermore, it is likely that no single imaging technique will be able to robustly detect and enable the reading of text within ancient Egyptian mummy cartonnage. However, by carefully selecting, optimising and combining techniques, text contained within these fragile and rare artefacts may eventually be open to non-destructive imaging, identification, and interpretation.

  12. Global image analysis to determine suitability for text-based image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.

    2012-03-01

    Image personalization has become a topic of growing interest. Images with variable elements such as text usually appear much more appealing to recipients. In this paper, we describe a method to pre-analyze an image and automatically suggest to the user the regions most suitable for text-based personalization. The method is based on input gathered from experiments conducted with professional designers. It has been observed that regions that are spatially smooth and regions with existing text (e.g., signage, banners, etc.) are the best candidates for personalization. This gives rise to two corresponding algorithms: one for identifying smooth areas, and one for locating text regions. Furthermore, based on the smooth and text regions found in the image, we derive an overall metric to rate the image in terms of its suitability for personalization (SFP).
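
    As a rough illustration of the smooth-region half of this idea, the sketch below flags low-texture areas by thresholding a local-variance map; the window size and variance threshold are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: flag spatially smooth regions as candidates for text-based
# personalization by thresholding a local-variance map. Window size and
# threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_region_mask(gray, win=15, var_thresh=50.0):
    """Return a boolean mask of low-variance (smooth) pixels in a grayscale image."""
    gray = gray.astype(np.float64)
    local_mean = uniform_filter(gray, size=win)
    local_sq_mean = uniform_filter(gray ** 2, size=win)
    local_var = local_sq_mean - local_mean ** 2   # E[x^2] - E[x]^2 per window
    return local_var < var_thresh
```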

  13. Stigmatizing Images in Obesity Health Campaign Messages and Healthy Behavioral Intentions.

    PubMed

    Young, Rachel; Subramanian, Roma; Hinnant, Amanda

    2016-08-01

    Background: Antiobesity campaigns blaming individual behaviors for obesity have sparked concern that an emphasis on individual behavior may lead to stigmatization of overweight or obese people. Past studies have shown that perpetuating stigma is not effective for influencing behavior. Purpose: This study examined whether stigmatizing or nonstigmatizing images and text in antiobesity advertisements led to differences in health-related behavioral intentions. Method: Participants in this experiment were 161 American adults. Measures included self-reported body mass index, weight satisfaction, antifat attitudes, and intention to increase healthy behaviors. Results: Images in particular prompted intention to increase healthy behavior, but only among participants who were not overweight or obese. Conclusion: Images and text emphasizing individual responsibility for obesity may influence behavioral intention among those who are not overweight, but they do not seem to be effective at altering behavioral intentions among overweight people, the target audience for many antiobesity messages. Images in antiobesity messages intended to alter behavior are influential and should be selected carefully. © 2015 Society for Public Health Education.

  14. 78 FR 67076 - Practices and Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-08

    ... as an attachment in any common electronic format, including word processing applications, HTML and PDF. If possible, commenters are asked to use a text format and not an image format for attachments...

  15. Identifying and Overcoming Obstacles to Point-of-Care Data Collection for Eye Care Professionals

    PubMed Central

    Lobach, David F.; Silvey, Garry M.; Macri, Jennifer M.; Hunt, Megan; Kacmaz, Roje O.; Lee, Paul P.

    2005-01-01

    Supporting data entry by clinicians is considered one of the greatest challenges in implementing electronic health records. In this paper we describe a formative evaluation study using three different methodologies through which we identified obstacles to point-of-care data entry for eye care and then used the formative process to develop and test solutions to overcome these obstacles. The greatest obstacles were supporting free text annotation of clinical observations and accommodating the creation of detailed diagrams in multiple colors. To support free text entry, we arrived at an approach that captures an image of a free text note and associates this image with related data elements in an encounter note. For detailed diagrams, we included a color palette that allowed changing pen color with a single stroke, and we likewise captured the diagrams as images associated with related data elements. During observed sessions with simulated patients, these approaches satisfied the clinicians’ documentation needs by capturing the full range of clinical complexity that arises in practice. PMID:16779083

  16. RSA Key Development Using Fingerprint Image on Text Message

    NASA Astrophysics Data System (ADS)

    Rahman, Sayuti; Triana, Indah; Khairani, Sumi; Yasir, Amru; Sundari, Siti

    2017-12-01

    With the development of today's technology, people can easily access information and communicate through various media, including the Internet. Messages sent as plain text, however, are not guaranteed to be secure: a sender may wish to deliver a secret message to a recipient, yet the message can be intercepted and read by unauthorized parties, so the confidentiality intended for the recipient alone is lost. To secure such messages we use the RSA algorithm, with a fingerprint image used to generate the RSA key. This enriches the security of a message; the fingerprint image must first be processed with feature extraction before the RSA keys are generated.
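
    The following sketch illustrates the general concept only, under the assumption that a hash of extracted fingerprint features seeds the prime generation; it is not the paper's algorithm and is not cryptographically vetted.

```python
# Heavily hedged sketch: derive a deterministic seed from fingerprint feature
# bytes, then use it to pick RSA primes. Illustration of the concept only;
# not the paper's method and not suitable for production cryptography.
import hashlib
import random
from sympy import nextprime

def rsa_keypair_from_features(feature_bytes, bits=1024):
    seed = int.from_bytes(hashlib.sha256(feature_bytes).digest(), "big")
    rng = random.Random(seed)
    half = bits // 2
    # Force the top bit so each prime has the intended size.
    p = nextprime(rng.getrandbits(half) | (1 << (half - 1)))
    q = nextprime(rng.getrandbits(half) | (1 << (half - 1)))
    n, phi, e = p * q, (p - 1) * (q - 1), 65537
    d = pow(e, -1, phi)   # modular inverse (Python 3.8+); assumes gcd(e, phi) == 1
    return (n, e), (n, d)  # public key, private key
```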

  17. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    PubMed

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
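
    A minimal sketch of the reranking step, assuming a simple linear combination of the text-engine score and a visual-similarity score; the weight and the scoring functions are placeholders rather than the system's actual model.

```python
# Minimal sketch: take the top candidates from a text-based engine and reorder
# them by a weighted sum of the text score and a visual score derived from the
# page's images. alpha and the score dictionaries are illustrative assumptions.
def rerank(candidates, text_scores, visual_scores, alpha=0.7):
    """candidates: list of doc ids; *_scores: dicts mapping doc id -> float."""
    combined = {d: alpha * text_scores[d] + (1 - alpha) * visual_scores[d]
                for d in candidates}
    return sorted(candidates, key=lambda d: combined[d], reverse=True)
```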

  18. Miraculous Readings: Using Fantasy Novels about Reading to Reflect on Reading the Bible

    ERIC Educational Resources Information Center

    Dalton, Russell W.

    2009-01-01

    This article reflects on the vivid images of reading presented in several popular fantasy novels, including "The Spiderwick Chronicles," "The Great Good Thing," and "The Neverending Story." It suggests that these images can be used to help children, youth, and adults reflect on the nature of reading and the potential power of reading sacred texts.…

  19. Optimising web site designs for people with learning disabilities

    PubMed Central

    Williams, Peter; Hennig, Christian

    2015-01-01

    Much relevant internet-mediated information is inaccessible to people with learning disabilities because of difficulties in navigating the web. This paper reports on the methods undertaken to determine how information can be optimally presented for this cohort. Qualitative work is outlined where attributes relating to site layout affecting usability were elicited. A study comparing web sites of different design layouts exhibiting these attributes is discussed, with the emphasis on methodology. Eight interfaces were compared using various combinations of menu position (vertical or horizontal), text size and the absence or presence of images to determine which attributes of a site have the greatest performance impact. Study participants were also asked for their preferences, via a ‘smiley-face’ rating scale and simple interviews. ‘Acquiescence bias’ was minimised by avoiding polar (‘yes/no’) interrogatives, achieved by asking participants to compare layouts (such as horizontal versus vertical menu), with reasons coaxed from those able to articulate them. Preferred designs were for large text and images. This was the reverse of those facilitating fastest retrieval times, a discrepancy due to preferences being judged on aesthetic considerations. Design recommendations that reconcile preference and performance findings are offered. These include using a horizontal menu, juxtaposing images and text, and reducing text from sentences to phrases, thus facilitating preferred large text without increasing task times. PMID:26097431

  20. Generating descriptive visual words and visual phrases for large-scale image applications.

    PubMed

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    The Bag-of-Visual-Words (BoW) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
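
    A minimal sketch of how frequently co-occurring visual word pairs might be collected as candidate visual phrases, assuming keypoints are paired when they fall within a spatial radius; the radius and frequency threshold are illustrative, not the paper's settings.

```python
# Minimal sketch: count pairs of visual words whose keypoints fall within a
# spatial radius, then keep pairs above a frequency threshold as "phrases".
from collections import Counter
from itertools import combinations

def visual_phrases(images, radius=30.0, min_count=50):
    """images: list of [(word_id, x, y), ...] per image; returns frequent word pairs."""
    pair_counts = Counter()
    for keypoints in images:
        for (w1, x1, y1), (w2, x2, y2) in combinations(keypoints, 2):
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2:
                pair_counts[tuple(sorted((w1, w2)))] += 1
    return {pair for pair, count in pair_counts.items() if count >= min_count}
```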

  1. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

    Background: Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. As in any automatic categorization effort, discriminative image features provide the most crucial aid in the process. Method: We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose novel image features for image categorization, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results: We randomly selected 990 images in JPG format for use in our experiments, of which 310 were used as training samples and the rest as testing cases. We first segmented the 310 sample images following our proposed procedure, which produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are of the type "others". A series of experimental results is reported. First, the categorization results are presented together with performance indexes such as precision, recall, and F-score. Second, the categorization performance obtained with different features, including conventional image features and our proposed novel features, is demonstrated. Third, we compare the accuracy of a support vector machine classifier with that of our proposed sparse coding representation classifier. Finally, our approach is compared with three peer classification methods, and the experimental results confirm a clear performance improvement. Conclusions: Compared with conventional image features that do not exploit characteristics regarding text positions and distributions inside images embedded in biomedical publications, our proposed image features coupled with the SCR-based representation model exhibit superior performance for classifying biomedical images, as demonstrated in our comparative benchmark study. PMID:24565470
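
    As a rough sketch of features that "characterize the spatial positions and distributions of text elements", the snippet below computes a few simple statistics over detected text boxes; the specific feature set is an assumption for illustration, not the paper's definition.

```python
# Minimal sketch: normalized centroid statistics and coverage of detected text
# boxes inside a figure, usable as a small feature vector for categorization.
import numpy as np

def text_layout_features(text_boxes, img_w, img_h):
    """text_boxes: list of (x, y, w, h) for text regions detected in the image."""
    if not text_boxes:
        return np.zeros(5)
    cx = np.array([x + w / 2 for x, y, w, h in text_boxes]) / img_w
    cy = np.array([y + h / 2 for x, y, w, h in text_boxes]) / img_h
    coverage = sum(w * h for x, y, w, h in text_boxes) / float(img_w * img_h)
    return np.array([cx.mean(), cy.mean(), cx.std(), cy.std(), coverage])
```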

  2. From regular text to artistic writing and artworks: Fourier statistics of images with low and high aesthetic appeal

    PubMed Central

    Melmer, Tamara; Amirshahi, Seyed A.; Koch, Michael; Denzler, Joachim; Redies, Christoph

    2013-01-01

    The spatial characteristics of letters and their influence on readability and letter identification have been intensely studied during the last decades. There have been few studies, however, on statistical image properties that reflect more global aspects of text, for example, properties that may relate to its aesthetic appeal. It has been shown that natural scenes and a large variety of visual artworks possess a scale-invariant Fourier power spectrum that falls off linearly with increasing frequency in log-log plots. We asked whether images of text share this property. As expected, the Fourier spectrum of images of regular typed or handwritten text is highly anisotropic, i.e., the spectral image properties in vertical, horizontal, and oblique orientations differ. Moreover, the spatial frequency spectra of text images are not scale-invariant in any direction. The decline is shallower in the low-frequency part of the spectrum for text than for aesthetic artworks, whereas, in the high-frequency part, it is steeper. These results indicate that, in general, images of regular text contain less global structure (low spatial frequencies) relative to fine detail (high spatial frequencies) than images of aesthetic artworks. Moreover, we studied images of text with artistic claim (ornate print and calligraphy) and ornamental art. For some measures, these images assume average values intermediate between regular text and aesthetic artworks. Finally, to answer the question of whether the statistical properties measured by us are universal amongst humans or are subject to intercultural differences, we compared images from three different cultural backgrounds (Western, East Asian, and Arabic). Results for different categories (regular text, aesthetic writing, ornamental art, and fine art) were similar across cultures. PMID:23554592
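
    A minimal sketch of the statistic discussed here: the slope of the radially averaged Fourier power spectrum in log-log coordinates. The binning choices and the exclusion of the DC component are implementation assumptions, not taken from the paper.

```python
# Minimal sketch: radially average the 2D power spectrum and fit a line to
# log(power) vs log(frequency); the returned slope is the quantity of interest.
import numpy as np

def log_log_spectrum_slope(gray):
    f = np.fft.fftshift(np.fft.fft2(gray - gray.mean()))
    power = np.abs(f) ** 2
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(x - w // 2, y - h // 2).astype(int)   # integer radius per pixel
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    hi = min(h, w) // 2
    radial = sums[1:hi] / counts[1:hi]                 # drop DC, keep valid radii
    freqs = np.arange(1, len(radial) + 1)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    return slope
```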

  3. Encoder: A Connectionist Model of How Learning to Visually Encode Fixated Text Images Improves Reading Fluency

    ERIC Educational Resources Information Center

    Martin, Gale L.

    2004-01-01

    This article proposes that visual encoding learning improves reading fluency by widening the span over which letters are recognized from a fixated text image so that fewer fixations are needed to cover a text line. Encoder is a connectionist model that learns to convert images like the fixated text images human readers encode into the…

  4. Study on Hybrid Image Search Technology Based on Texts and Contents

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.

    2018-05-01

    Text-based and content-based image search were first studied separately. A text-based image feature extraction method was put forward that integrates statistical and topic features, addressing the limitation of extracting keywords from statistical word features alone. A search-by-image method based on multi-feature fusion was also put forward, addressing the imprecision of content-based image search that relies on a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered search method was then proposed that relies primarily on text-based image search and secondarily on content-based image search. The feasibility and effectiveness of the hybrid search algorithm were experimentally verified.

  5. Image/text automatic indexing and retrieval system using context vector approach

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick

    1995-11-01

    Thousands of documents and images are generated daily both on and off line on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. The technique is based on the concept of `context vectors', which encode a succinct representation of the associated text and the features of sub-images. In this paper, we describe the Automated Librarian System, which was designed for free text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as a query, without performing segmentation or object recognition.

  6. Multisite concordance of apparent diffusion coefficient measurements across the NCI Quantitative Imaging Network.

    PubMed

    Newitt, David C; Malyarenko, Dariya; Chenevert, Thomas L; Quarles, C Chad; Bell, Laura; Fedorov, Andriy; Fennessy, Fiona; Jacobs, Michael A; Solaiyappan, Meiyappan; Hectors, Stefanie; Taouli, Bachir; Muzi, Mark; Kinahan, Paul E; Schmainda, Kathleen M; Prah, Melissa A; Taber, Erin N; Kroenke, Christopher; Huang, Wei; Arlinghaus, Lori R; Yankeelov, Thomas E; Cao, Yue; Aryal, Madhava; Yen, Yi-Fen; Kalpathy-Cramer, Jayashree; Shukla-Dave, Amita; Fung, Maggie; Liang, Jiachao; Boss, Michael; Hylton, Nola

    2018-01-01

    Diffusion weighted MRI has become ubiquitous in many areas of medicine, including cancer diagnosis and treatment response monitoring. Reproducibility of diffusion metrics is essential for their acceptance as quantitative biomarkers in these areas. We examined the variability in the apparent diffusion coefficient (ADC) obtained from both postprocessing software implementations utilized by the NCI Quantitative Imaging Network and online scan time-generated ADC maps. Phantom and in vivo breast studies were evaluated for two ([Formula: see text]) and four ([Formula: see text]) [Formula: see text]-value diffusion metrics. Concordance of the majority of implementations was excellent for both phantom ADC measures and in vivo [Formula: see text], with relative biases [Formula: see text] ([Formula: see text]) and [Formula: see text] (phantom [Formula: see text]) but with higher deviations in ADC at the lowest phantom ADC values. In vivo [Formula: see text] concordance was good, with typical biases of [Formula: see text] to 3% but higher for online maps. Multiple b-value ADC implementations were separated into two groups determined by the fitting algorithm. Intergroup mean ADC differences ranged from negligible for phantom data to 2.8% for [Formula: see text] in vivo data. Some higher deviations were found for individual implementations and online parametric maps. Despite generally good concordance, implementation biases in ADC measures are sometimes significant and may be large enough to be of concern in multisite studies.
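
    For reference, a two-b-value ADC map can be computed voxel-wise from the mono-exponential signal model as ADC = ln(S_low/S_high) / (b_high - b_low); the sketch below shows this standard calculation with placeholder b-values, not the protocol values used in the study.

```python
# Minimal sketch of the standard two-point ADC calculation; b-values shown are
# placeholders, not the study's acquisition parameters.
import numpy as np

def adc_two_point(s_low, s_high, b_low=0.0, b_high=800.0, eps=1e-6):
    """s_low, s_high: signal arrays acquired at the low and high b-values (same shape)."""
    ratio = np.clip(s_low, eps, None) / np.clip(s_high, eps, None)
    return np.log(ratio) / (b_high - b_low)
```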

  7. Pancreatic Cancer Detection Consortium (PCDC) | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2256","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"A 3-dimensional image of a human torso highlighting the pancreas.","field_file_image_title_text[und][0][value]":false},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"A 3-dimensional image of a human torso

  8. BreakingNews: Article Annotation by Image and Text Processing.

    PubMed

    Ramisa, Arnau; Yan, Fei; Moreno-Noguer, Francesc; Mikolajczyk, Krystian

    2018-05-01

    Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.
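
    The geolocation loss mentioned above is based on great-circle distance; the sketch below shows the standard haversine form of that distance between two latitude/longitude points, not the paper's loss implementation.

```python
# Minimal sketch: haversine great-circle distance between two points given in
# degrees, returned in kilometres.
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))
```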

  9. Segmentation and feature extraction of cervical spine x-ray images

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1999-05-01

    As part of an R&D project in mixed text/image database design, the National Library of Medicine has archived a collection of 17,000 digitized x-ray images of the cervical and lumbar spine which were collected as part of the second National Health and Nutrition Examination Survey (NHANES II). To make this image data available and usable to a wide audience, we are investigating techniques for indexing the image content by automated or semi-automated means. Indexing of the images by features of interest to researchers in spine disease and structure requires effective segmentation of the vertebral anatomy. This paper describes work in progress toward this segmentation of the cervical spine images into anatomical components of interest, including anatomical landmarks for vertebral location, and segmentation and identification of individual vertebrae. Our work includes developing a reliable method for automatically fixing an anatomy-based coordinate system in the images, and work to adaptively threshold the images, using methods previously applied by researchers in cardioangiography. We describe the motivation for our work and present our current results in both areas.

  10. Teach Ourselves: Social Networks for CS Stem Education

    DTIC Science & Technology

    2014-12-01

    …with peers. Teach Ourselves includes features that were inspired by recent research on the engaging properties of computer games, including the chance… In an online learning community (“Teach Ourselves”)…content shared and viewed on the Internet, including text, images, videos and even home-authored games. The ease with which content can now be created…

  11. Translating statistical images to text summaries for partially sighted persons on mobile devices: iconic image maps approach

    NASA Astrophysics Data System (ADS)

    Williams, Godfried B.

    2005-03-01

    This paper demonstrates a novel idea for transforming statistical image data to text using an autoassociative, unsupervised artificial neural network and iconic image maps based on a shape and texture genetic algorithm, the underlying concepts for translating the image data to text. Full details of the experiments can be accessed at http://www.uel.ac.uk/seis/applications/.

  12. Development of educational image databases and e-books for medical physics training.

    PubMed

    Tabakov, S; Roberts, V C; Jonsson, B-A; Ljungberg, M; Lewis, C A; Wirestam, R; Strand, S-E; Lamm, I-L; Milano, F; Simmons, A; Deane, C; Goss, D; Aitken, V; Noel, A; Giraud, J-Y; Sherriff, S; Smith, P; Clarke, G; Almqvist, M; Jansson, T

    2005-09-01

    Medical physics education and training requires the use of extensive imaging material and specific explanations. These requirements provide an excellent background for the application of e-Learning. The EU project consortia EMERALD and EMIT developed five volumes of such materials, now used in 65 countries. EMERALD developed e-Learning materials in three areas of medical physics (X-ray diagnostic radiology, nuclear medicine and radiotherapy). EMIT developed e-Learning materials in two further areas: ultrasound and magnetic resonance imaging. This paper describes the development of these e-Learning materials (consisting of e-books and educational image databases). The e-books include tasks that support the study of various equipment and methods. The text of these PDF e-books is hyperlinked with the respective images. The e-books are used through the readers' own Internet browser. Each Image Database (IDB) includes a browser, which displays hundreds of images of equipment, block diagrams and graphs, image quality examples, artefacts, etc. Both the e-books and IDBs are engraved on five separate CD-ROMs. A demo of these materials can be obtained from www.emerald2.net.

  13. Optimal classification for the diagnosis of duchenne muscular dystrophy images using support vector machines.

    PubMed

    Zhang, Ming-Huan; Ma, Jun-Shan; Shen, Ying; Chen, Ying

    2016-09-01

    This study aimed to investigate the optimal support vector machine (SVM)-based classifier of Duchenne muscular dystrophy (DMD) magnetic resonance imaging (MRI) images. T1-weighted (T1W) and T2-weighted (T2W) images of 15 boys with DMD and 15 normal controls were obtained. Textural features of the images were extracted and wavelet decomposed, and then principal features were selected. Scale transform was then performed for the MRI images. Afterward, SVM-based classifiers of MRI images were analyzed based on the radial basis function and decomposition levels. The cost (C) parameter and kernel parameter [Formula: see text] were used for classification. Then, the optimal SVM-based classifier, expressed as [Formula: see text]), was identified by performance evaluation (sensitivity, specificity and accuracy). Eight of 12 textural features were selected as principal features (eigenvalues [Formula: see text]). The 16 SVM-based classifiers were obtained using combinations of (C, [Formula: see text]), and those with lower C and [Formula: see text] values showed higher performances, especially the classifier of [Formula: see text]). The SVM-based classifiers of T1W images showed higher performance than those of T2W images at the same decomposition level. The T1W images in the classifier of [Formula: see text]) at level 2 decomposition showed the highest performance of all, and its overall sensitivity, specificity, and accuracy reached 96.9, 97.3, and 97.1 %, respectively. The T1W images in the SVM-based classifier [Formula: see text] at level 2 decomposition showed the highest performance of all, demonstrating that it was the optimal classification for the diagnosis of DMD.
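
    A minimal sketch of selecting an RBF-kernel SVM over a (C, gamma) grid by cross-validated accuracy, which mirrors the classifier-selection step described here; the grid values are illustrative assumptions, not the study's parameter ranges.

```python
# Minimal sketch: grid-search an RBF SVM over (C, gamma) with 5-fold
# cross-validation and return the best model and its parameters.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def best_rbf_svm(features, labels):
    grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5, scoring="accuracy")
    search.fit(features, labels)
    return search.best_estimator_, search.best_params_
```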

  14. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods.

    PubMed

    Hancock, Matthew C; Magnan, Jerry F

    2016-10-01

    In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 [Formula: see text], which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 ([Formula: see text]), which increases to 0.949 ([Formula: see text]) when diameter and volume features are included and has an accuracy of 88.08 [Formula: see text]. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.

  15. 75 FR 52780 - Notice of Availability of Final Supplemental Environmental Impact Statement for the Moore Ranch...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-27

    ... considered, but were eliminated from detailed analysis include: conventional mining (whether by open pit or... Agencywide Documents and Management System (ADAMS), which provides text and image files of the NRC's public...

  16. Providing an integrated clinical data view in a hospital information system that manages multimedia data.

    PubMed

    Dayhoff, R E; Maloney, D L; Kenney, T J; Fletcher, R D

    1991-01-01

    The VA's hospital information system, the Decentralized Hospital Computer Program (DHCP), is an integrated system based on a powerful set of software tools with shared data accessible from any of its application modules. It includes many functionally specific application subsystems such as laboratory, pharmacy, radiology, and dietetics. Physicians need applications that cross these application boundaries to provide useful and convenient patient data. One of these multi-specialty applications, the DHCP Imaging System, integrates multimedia data to provide clinicians with comprehensive patient-oriented information. User requirements for cross-disciplinary image access can be studied to define needs for similar text data access. Integration approaches must be evaluated both for their ability to deliver patient-oriented text data rapidly and their ability to integrate multimedia data objects. Several potential integration approaches are described as they relate to the DHCP Imaging System.

  17. Providing an integrated clinical data view in a hospital information system that manages multimedia data.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Kenney, T. J.; Fletcher, R. D.

    1991-01-01

    The VA's hospital information system, the Decentralized Hospital Computer Program (DHCP), is an integrated system based on a powerful set of software tools with shared data accessible from any of its application modules. It includes many functionally specific application subsystems such as laboratory, pharmacy, radiology, and dietetics. Physicians need applications that cross these application boundaries to provide useful and convenient patient data. One of these multi-specialty applications, the DHCP Imaging System, integrates multimedia data to provide clinicians with comprehensive patient-oriented information. User requirements for cross-disciplinary image access can be studied to define needs for similar text data access. Integration approaches must be evaluated both for their ability to deliver patient-oriented text data rapidly and their ability to integrate multimedia data objects. Several potential integration approaches are described as they relate to the DHCP Imaging System. PMID:1807651

  18. How to use the WWW to distribute STI

    NASA Technical Reports Server (NTRS)

    Roper, Donna G.

    1994-01-01

    This presentation explains how to use the World Wide Web (WWW) to distribute scientific and technical information as hypermedia. WWW clients and servers use the HyperText Transfer Protocol (HTTP) to transfer documents containing links to other text, graphics, video, and sound. The standard language for these documents is the HyperText Markup Language (HTML). These are simply text files with formatting codes that contain layout information and hyperlinks. HTML documents can be created with any text editor or with one of the publicly available HTML editors or converters. HTML can also include links to available image formats. This presentation is available online at http://sti.larc.nasa.gov/demos/workshop/introtext.html.

  19. Image Analysis via Fuzzy-Reasoning Approach: Prototype Applications at NASA

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steven J.

    2004-01-01

    A set of imaging techniques based on the Fuzzy Reasoning (FR) approach was built for NASA at Kennedy Space Center (KSC) to perform complex real-time visual-related safety prototype tasks, such as detection and tracking of moving Foreign Object Debris (FOD) during NASA Space Shuttle liftoff and visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad. The system has also shown promise in enhancing X-ray images used to screen hard-covered items, leading to better visualization. The system's capability was also used during the imaging analysis of the Space Shuttle Columbia accident. These FR-based imaging techniques include novel proprietary adaptive image segmentation, image edge extraction, and image enhancement. A Probabilistic Neural Network (PNN) scheme available from the NeuroShell(TM) Classifier and optimized via a Genetic Algorithm (GA) was also used along with this set of novel imaging techniques to add powerful learning and image classification capabilities. Prototype applications built using these techniques have received NASA Space Awards, including a Board Action Award, and are currently being filed for patents by NASA; they are being offered for commercialization through the Research Triangle Institute (RTI), an internationally recognized corporation in scientific research and technology development. Companies from different fields, including security, medical, text digitization, and aerospace, are currently in the process of licensing these technologies from NASA.

  20. Text-image alignment for historical handwritten documents

    NASA Astrophysics Data System (ADS)

    Zinger, S.; Nerbonne, J.; Schomaker, L.

    2009-01-01

    We describe our work on text-image alignment in context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set - images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines and their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting is a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take into account the relative word length, we define the expressions for the cost function that has to be minimized for aligning text words with their images. We apply right to left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
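
    A minimal sketch of the relative-length idea: score a candidate alignment by how closely each word's share of the line's character count matches its image segment's share of the line's pixel width. The quadratic penalty and one-to-one pairing are illustrative choices, not the paper's exact cost function.

```python
# Minimal sketch: alignment cost comparing each word's relative character length
# with its image segment's relative pixel width (lower is better). Assumes the
# number of transcript words equals the number of candidate segments.
def alignment_cost(word_lengths, segment_widths):
    """word_lengths: characters per word; segment_widths: pixels per segment."""
    total_chars = float(sum(word_lengths))
    total_px = float(sum(segment_widths))
    return sum((c / total_chars - w / total_px) ** 2
               for c, w in zip(word_lengths, segment_widths))
```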

  1. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2015-03-31

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes a display configured to depict visible images, and processing circuitry coupled with the display and wherein the processing circuitry is configured to access a first vector of a text item and which comprises a plurality of components, to access a second vector of the text item and which comprises a plurality of components, to weight the components of the first vector providing a plurality of weighted values, to weight the components of the second vector providing a plurality of weighted values, and to combine the weighted values of the first vector with the weighted values of the second vector to provide a third vector.
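
    A minimal sketch of the claimed combination step, assuming element-wise weighting and addition of the two component vectors; the record leaves the exact combination unspecified, so this is an illustration only.

```python
# Minimal sketch: weight the components of two vectors describing a text item
# and combine them into a third vector (element-wise, which is an assumption).
import numpy as np

def combine_vectors(v1, v2, w1, w2):
    """v1, v2: component vectors of a text item; w1, w2: per-component weights."""
    return np.asarray(v1) * np.asarray(w1) + np.asarray(v2) * np.asarray(w2)
```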

  2. Localizing text in scene images by boundary clustering, stroke segmentation, and string fragment classification.

    PubMed

    Yi, Chucai; Tian, Yingli

    2012-09-01

    In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.

  3. Preserving the Illustrated Text. Report of the Joint Task Force on Text and Image.

    ERIC Educational Resources Information Center

    Commission on Preservation and Access, Washington, DC.

    The mission of the Joint Task Force on Text and Image was to inquire into the problems, needs, and methods for preserving images in text that are important for scholarship in a wide range of disciplines and to draw from that exploration a set of principles, guidelines, and recommendations for a comprehensive national strategy for image…

  4. Research as Repatriation.

    ERIC Educational Resources Information Center

    Plum, Terry; Smalley, Topsy N.

    1994-01-01

    Discussion of humanities research focuses on the humanist patron as author of the text. Highlights include the research process; style of expression; interpretation; multivocality; reflexivity; social validation; repatriation; the image of the library for the author; patterns of searching behavior; and reference librarian responses. (37…

  5. Text String Detection from Natural Scenes by Structure-based Partition and Grouping

    PubMed Central

    Yi, Chucai; Tian, YingLi

    2012-01-01

    Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) Image partition to find text character candidates based on local gradient features and color uniformity of character components. 2) Character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) adjacent character grouping method, and 2) text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text string. The text line grouping method performs Hough transform to fit text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is presented by a rectangle region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out in multi-scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods to detect text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves containing text strings in non-horizontal orientations. PMID:21411405

  6. Text string detection from natural scenes by structure-based partition and grouping.

    PubMed

    Yi, Chucai; Tian, YingLi

    2011-09-01

    Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components and 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) adjacent character grouping method and 2) text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text string. The text line grouping method performs Hough transform to fit text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is presented by a rectangle region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out in multi-scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods to detect text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves containing text strings in nonhorizontal orientations.
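
    A minimal sketch of an adjacent-character-grouping step in the spirit described above: link character candidates with similar heights and small horizontal gaps, and keep chains of at least three as text-string candidates. The thresholds are illustrative assumptions, not the paper's values.

```python
# Minimal sketch: chain left-to-right character boxes with similar heights and
# small gaps; keep chains of three or more boxes as text-string candidates.
def group_adjacent_characters(boxes, max_gap_ratio=1.5, max_height_ratio=1.5):
    """boxes: list of (x, y, w, h) character candidates, assumed sorted left-to-right."""
    if not boxes:
        return []
    strings, current = [], [boxes[0]]
    for prev, box in zip(boxes, boxes[1:]):
        px, _, pw, ph = prev
        x, _, _, h = box
        gap = x - (px + pw)
        similar_height = max(h, ph) / max(min(h, ph), 1) <= max_height_ratio
        close = gap <= max_gap_ratio * max(h, ph)
        if similar_height and close:
            current.append(box)
        else:
            if len(current) >= 3:
                strings.append(current)
            current = [box]
    if len(current) >= 3:
        strings.append(current)
    return strings
```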

  7. Analysis of acetabular orientation and femoral anteversion using images of three-dimensional reconstructed bone models.

    PubMed

    Park, Jaeyeong; Kim, Jun-Young; Kim, Hyun Deok; Kim, Young Cheol; Seo, Anna; Je, Minkyu; Mun, Jong Uk; Kim, Bia; Park, Il Hyung; Kim, Shin-Yoon

    2017-05-01

    Radiographic measurements using two-dimensional (2D) plain radiographs or planes from computed tomography (CT) scans have several drawbacks, while measurements using images of three-dimensional (3D) reconstructed bone models can provide more consistent anthropometric information. We compared the consistency of results using measurements based on images of 3D reconstructed bone models (3D measurements) with those using planes from CT scans (measurements using 2D slice images). Ninety-six of 561 patients who had undergone deep vein thrombosis-CT between January 2013 and November 2014 were randomly selected. We evaluated measurements using 2D slice images and 3D measurements. The images used for 3D reconstruction of bone models were obtained and measured using [Formula: see text] and [Formula: see text] (Materialize, Leuven, Belgium). The mean acetabular inclination, acetabular anteversion and femoral anteversion values on 2D slice images were 42.01[Formula: see text], 18.64[Formula: see text] and 14.44[Formula: see text], respectively, while those using images of 3D reconstructed bone models were 52.80[Formula: see text], 14.98[Formula: see text] and 17.26[Formula: see text]. Intra-rater reliabilities for acetabular inclination, acetabular anteversion, and femoral anteversion on 2D slice images were 0.55, 0.81, and 0.85, respectively, while those for 3D measurements were 0.98, 0.99, and 0.98. Inter-rater reliabilities for acetabular inclination, acetabular anteversion and femoral anteversion on 2D slice images were 0.48, 0.86, and 0.84, respectively, while those for 3D measurements were 0.97, 0.99, and 0.97. The differences between the two measurements are explained by the use of different tools. However, more consistent measurements were possible using the images of 3D reconstructed bone models. Therefore, 3D measurement can be a good alternative to measurement using 2D slice images.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myronakis, M; Cai, W; Dhou, S

    Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom, or to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA, Varian Medical Systems Inc.

  9. Representation of scientific methodology in secondary science textbooks

    NASA Astrophysics Data System (ADS)

    Binns, Ian C.

    The purpose of this investigation was to assess the representation of scientific methodology in secondary science textbooks. More specifically, this study looked at how textbooks introduced scientific methodology and to what degree the examples from the rest of the textbook, the investigations, and the images were consistent with the text's description of scientific methodology, if at all. The sample included eight secondary science textbooks from two publishers, McGraw-Hill/Glencoe and Harcourt/Holt, Rinehart & Winston. Data consisted of all student text and teacher text that referred to scientific methodology. Second, all investigations in the textbooks were analyzed. Finally, any images that depicted scientists working were also collected and analyzed. The text analysis and activity analysis used the ethnographic content analysis approach developed by Altheide (1996). The rubrics used for the text analysis and activity analysis were initially guided by the Benchmarks (AAAS, 1993), the NSES (NRC, 1996), and the nature of science literature. Preliminary analyses helped to refine each of the rubrics and grounded them in the data. Image analysis used stereotypes identified in the DAST literature. Findings indicated that all eight textbooks presented mixed views of scientific methodology in their initial descriptions. Five textbooks placed more emphasis on the traditional view and three placed more emphasis on the broad view. Results also revealed that the initial descriptions, examples, investigations, and images all emphasized the broad view for Glencoe Biology and the traditional view for Chemistry: Matter and Change. The initial descriptions, examples, investigations, and images in the other six textbooks were not consistent. Overall, the textbook with the most appropriate depiction of scientific methodology was Glencoe Biology and the textbook with the least appropriate depiction of scientific methodology was Physics: Principles and Problems. These findings suggest that compared to earlier investigations, textbooks have begun to improve in how they represent scientific methodology. However, there is still much room for improvement. Future research needs to consider how textbooks impact teachers' and students' understandings of scientific methodology.

  10. Possible costs associated with investigating and mitigating geologic hazards in rural areas of western San Mateo County, California with a section on using the USGS website to determine the cost of developing property for residences in rural parts of San Mateo County, California

    USGS Publications Warehouse

    Brabb, Earl E.; Roberts, Sebastian; Cotton, William R.; Kropp, Alan L.; Wright, Robert H.; Zinn, Erik N.; Digital database by Roberts, Sebastian; Mills, Suzanne K.; Barnes, Jason B.; Marsolek, Joanna E.

    2000-01-01

    This publication consists of a digital map database on a geohazards web site, http://kaibab.wr.usgs.gov/geohazweb/intro.htm, this text, and 43 digital map images available for downloading at this site. The report is stored as several digital files, in ARC export (uncompressed) format for the database, and Postscript and PDF formats for the map images. Several of the source data layers for the images have already been released in other publications by the USGS and are available for downloading on the Internet. These source layers are not included in this digital database, but rather a reference is given for the web site where the data can be found in digital format. The exported ARC coverages and grids lie in UTM zone 10 projection. The pamphlet, which only describes the content and character of the digital map database, is included as Postscript, PDF, and ASCII text files and is also available on paper as USGS Open-File Report 00-127. The full versatility of the spatial database is realized by importing the ARC export files into ARC/INFO or an equivalent GIS. Other GIS packages, including MapInfo and ARCVIEW, can also use the ARC export files. The Postscript map image can be used for viewing or plotting in computer systems with sufficient capacity, and the considerably smaller PDF image files can be viewed or plotted in full or in part from Adobe ACROBAT software running on Macintosh, PC, or UNIX platforms.

  11. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of top-down approaches and the particular characteristics of the target documents, we propose a projection-based layout segmentation algorithm. The algorithm first partitions the text image into several columns; each column is then scanned and projected, and the text image is divided into several sub-regions through repeated projection. The experimental results show that the method retains the fast computation of projection-based approaches while avoiding the effect of arced page images on segmentation, and that it can accurately segment text images with complex layouts.
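
    The abstract gives no implementation details, but the column-splitting idea it describes can be illustrated with a minimal Python sketch (NumPy assumed; the gap threshold and function name are illustrative, not taken from the paper):

        import numpy as np

        def split_columns(binary_img, gap_thresh=0.005):
            """Split a binarized page (text=1, background=0) into column spans by
            finding runs of near-empty columns in the vertical projection."""
            proj = binary_img.sum(axis=0) / binary_img.shape[0]  # ink fraction per column
            is_gap = proj < gap_thresh
            columns, start = [], None
            for x, gap in enumerate(is_gap):
                if not gap and start is None:
                    start = x
                elif gap and start is not None:
                    columns.append((start, x))
                    start = None
            if start is not None:
                columns.append((start, len(is_gap)))
            return columns  # list of (x_start, x_end) column extents

    Each recovered column would then be projected horizontally in the same way to divide it into sub-regions, which is the repeated projection step the abstract describes.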

  12. The Database Business: Managing Today--Planning for Tomorrow. Quality Assurance of Text and Image Databases at the U.S. Patent and Trademark Office.

    ERIC Educational Resources Information Center

    Grooms, David W.

    1988-01-01

    Discusses the quality controls imposed on text and image data that is currently being converted from paper to digital images by the Patent and Trademark Office. The methods of inspection used on text and on images are described, and the quality of the data delivered thus far is discussed. (CLB)

  13. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    ARL-TR-7705, US Army Research Laboratory, June 2016. Only report-header and reference fragments were extracted; no abstract is available.

  14. Medical Image Encryption: An Application for Improved Padding Based GGH Encryption Algorithm

    PubMed Central

    Sokouti, Massoud; Zakerolhosseini, Ali; Sokouti, Babak

    2016-01-01

    Medical images are regarded as important and sensitive data in medical informatics systems. For transferring medical images over an insecure network, developing a secure encryption algorithm is necessary. Among the three main properties of security services (i.e., confidentiality, integrity, and availability), confidentiality is the most essential feature for exchanging medical images among physicians. The Goldreich Goldwasser Halevi (GGH) algorithm can be a good choice for encrypting medical images, as both the algorithm and the sensitive data are represented by numeric matrices. Additionally, the GGH algorithm does not increase the size of the image, and hence its complexity remains as low as O(n^2). However, one of the disadvantages of using the GGH algorithm is its vulnerability to the chosen-ciphertext attack. In our strategy, this shortcoming of the GGH algorithm has been taken into consideration and addressed by applying padding (i.e., snail-tour XORing) before the GGH encryption process. For evaluating the performance, three measurement criteria are considered: (i) Number of Pixels Change Rate (NPCR), (ii) Unified Average Changing Intensity (UACI), and (iii) avalanche effect. The results on three different sizes of images showed that the padded GGH approach improved UACI, NPCR, and avalanche by almost 100%, 35%, and 45%, respectively, in comparison to the standard GGH algorithm. The outcomes also make the padded GGH resistant to ciphertext-only, chosen-ciphertext, and statistical attacks. Furthermore, an increase in the avalanche effect of more than 50% is a promising achievement given the added complexity of the proposed method's encryption and decryption processes. PMID:27857824
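
    NPCR and UACI are standard diffusion metrics with well-known definitions; a minimal NumPy sketch of how they are commonly computed (not code from the paper) is:

        import numpy as np

        def npcr_uaci(c1, c2):
            """NPCR and UACI between two cipher images (uint8 arrays of the same shape)."""
            a = c1.astype(np.float64)
            b = c2.astype(np.float64)
            npcr = (a != b).mean() * 100.0                   # % of pixels that differ
            uaci = (np.abs(a - b) / 255.0).mean() * 100.0    # mean normalized intensity change
            return npcr, uaci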

  15. Document Examination: Applications of Image Processing Systems.

    PubMed

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
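
    As a generic illustration of the contrast-enhancement step the review mentions, a local contrast boost with OpenCV's CLAHE could be sketched as follows (the function name and parameter values are illustrative and not taken from the article):

        import cv2

        def enhance_legibility(path):
            """Boost local contrast of a scanned document with CLAHE, a generic
            stand-in for the simple contrast enhancement described in the review."""
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            return clahe.apply(gray)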

  16. Unveiling the Secrets of Archimedes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Manisha

    2008-03-13

    Progress has been made at the Stanford Linear Accelerator Center (SLAC) toward deciphering the remaining 10-20% of ancient Greek text contained in the Archimedes palimpsest. The text is known to contain valuable works by the mathematician, including the Method of Mechanical Theorems, the Equilibrium of Planes, On Floating Bodies, and several diagrams. The only surviving copy of the text was recycled into a prayer book in the Middle Ages. The ink used to write on the goat skin parchment is partly composed of iron, which is visible by x-ray radiation. To image the palimpsest pages, the parchment was framed and placed in a stage that moved according to the raster method. When an x-ray beam was incident upon the parchment the iron in the ink was detected by a germanium detector. The resultant signal was converted to a gray-scale image. It was extremely important that each line of data was well aligned with the line that came before it. The objectives of this experiment were to determine the best parameters for producing well-aligned images and to reduce the scanning time. Imaging half a page of parchment during previous beam time for this project was achieved in thirty hours. Equations were produced to evaluate count time, shutter time, and the number of pixels in this experiment. On Beamline 6-2 at the Stanford Synchrotron Radiation Laboratory (SSRL), actual scanning time was reduced by one fourth. The remaining pages were successfully imaged and sent to ancient Greek experts for translation.

  17. Unveiling the Secrets of Archimedes

    NASA Astrophysics Data System (ADS)

    Turner, Manisha

    2008-03-01

    Progress has been made at the Stanford Linear Accelerator Center (SLAC) toward deciphering the remaining 10-20% of ancient Greek text contained in the Archimedes palimpsest. The text is known to contain valuable works by the mathematician, including the Method of Mechanical Theorems, the Equilibrium of Planes, On Floating Bodies, and several diagrams. The only surviving copy of the text was recycled into a prayer book in the Middle Ages. The ink used to write on the goat skin parchment is partly composed of iron, which is visible by x-ray radiation. To image the palimpsest pages, the parchment was framed and placed in a stage that moved according to the raster method. When an x-ray beam was incident upon the parchment the iron in the ink was detected by a germanium detector. The resultant signal was converted to a gray-scale image. It was extremely important that each line of data was well aligned with the line that came before it. The objectives of this experiment were to determine the best parameters for producing well-aligned images and to reduce the scanning time. Imaging half a page of parchment during previous beam time for this project was achieved in thirty hours. Equations were produced to evaluate count time, shutter time, and the number of pixels in this experiment. On Beamline 6-2 at the Stanford Synchrotron Radiation Laboratory (SSRL), actual scanning time was reduced by one fourth. The remaining pages were successfully imaged and sent to ancient Greek experts for translation.

  18. StemTextSearch: Stem cell gene database with evidence from abstracts.

    PubMed

    Chen, Chou-Cheng; Ho, Chung-Liang

    2017-05-01

    Previous studies have used many methods to find biomarkers in stem cells, including text mining, experimental data and image storage. However, no text-mining methods have yet been developed which can identify whether a gene plays a positive or negative role in stem cells. StemTextSearch identifies the role of a gene in stem cells by using a text-mining method to find combinations of gene regulation, stem-cell regulation and cell processes in the same sentences of biomedical abstracts. The dataset includes 5797 genes, with 1534 genes having positive roles in stem cells, 1335 genes having negative roles, 1654 genes with both positive and negative roles, and 1274 with an uncertain role. The precision of gene role in StemTextSearch is 0.66, and the recall is 0.78. StemTextSearch is a web-based engine with queries that specify (i) gene, (ii) category of stem cell, (iii) gene role, (iv) gene regulation, (v) cell process, (vi) stem-cell regulation, and (vii) species. StemTextSearch is available through http://bio.yungyun.com.tw/StemTextSearch.aspx. Copyright © 2017. Published by Elsevier Inc.

  19. Digital data from the Questa-San Luis and Santa Fe East helicopter magnetic surveys in Santa Fe and Taos Counties, New Mexico, and Costilla County, Colorado

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; Drenth, B.J.; ,

    2006-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during high-resolution aeromagnetic surveys in southern Colorado and northern New Mexico in December, 2005. One survey covers the eastern edge of the San Luis basin, including the towns of Questa, New Mexico and San Luis, Colorado. A second survey covers the mountain front east of Santa Fe, New Mexico, including the town of Chimayo and portions of the Pueblos of Tesuque and Nambe. Several derivative products from these data are also presented as grids and images, including reduced-to-pole data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  20. Recent progress in the development of ISO 19751

    NASA Astrophysics Data System (ADS)

    Farnand, Susan P.; Dalal, Edul N.; Ng, Yee S.

    2006-01-01

    A small number of general visual attributes have been recognized as essential in describing image quality. These include micro-uniformity, macro-uniformity, colour rendition, text and line quality, gloss, sharpness, and spatial adjacency or temporal adjacency attributes. The multiple-part International Standard discussed here was initiated by the INCITS W1 committee on the standardization of office equipment to address the need for unambiguously documented procedures and methods, which are widely applicable over the multiple printing technologies employed in office applications, for the appearance-based evaluation of these visually significant attributes of printed image quality [1,2]. The resulting proposed International Standard, for which ISO/IEC WD 19751-1 [3] presents an overview and an outline of the overall procedure and common methods, is based on a proposal that was predicated on the idea that image quality could be described by a small set of broad-based attributes [4]. Five ad hoc teams were established (now six, since a sharpness team is in the process of being formed) to generate standards for one or more of these image quality attributes. Updates on the colour rendition, text and line quality, and gloss attributes are provided.

  1. Interactive publications: creation and usage

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Ford, Glenn; Chung, Michael; Vasudevan, Kirankumar; Antani, Sameer

    2006-02-01

    As envisioned here, an "interactive publication" has similarities to multimedia documents that have been in existence for a decade or more, but possesses specific differentiating characteristics. In common usage, the latter refers to online entities that, in addition to text, consist of files of images and video clips residing separately in databases, rarely providing immediate context to the document text. While an interactive publication has many media objects as does the "traditional" multimedia document, it is a self-contained document, either as a single file with media files embedded within it, or as a "folder" containing tightly linked media files. The main characteristic that differentiates an interactive publication from a traditional multimedia document is that the reader would be able to reuse the media content for analysis and presentation, and to check the underlying data and possibly derive alternative conclusions leading, for example, to more in-depth peer reviews. We have created prototype publications containing paginated text and several media types encountered in the biomedical literature: 3D animations of anatomic structures; graphs, charts and tabular data; cell development images (video sequences); and clinical images such as CT, MRI and ultrasound in the DICOM format. This paper presents developments to date including: a tool to convert static tables or graphs into interactive entities, authoring procedures followed to create prototypes, and advantages and drawbacks of each of these platforms. It also outlines future work including meeting the challenge of network distribution for these large files.

  2. Semi-automatic image personalization tool for variable text insertion and replacement

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-02-01

    Image personalization is a widely used technique in personalized marketing [1], in which a vendor attempts to promote new products or retain customers by sending marketing collateral that is tailored to the customers' demographics, needs, and interests. With current solutions of which we are aware, such as XMPie [2], DirectSmile [3], and AlphaPicture [4], in order to produce this tailored marketing collateral, image templates need to be created manually by graphic designers, involving complex grid manipulation and detailed geometric adjustments. As a matter of fact, the image template design is highly manual, skill-demanding and costly, and essentially the bottleneck for image personalization. We present a semi-automatic image personalization tool for designing image templates. Two scenarios are considered: text insertion and text replacement, with the text replacement option not offered in current solutions. The graphical user interface (GUI) of the tool is described in detail. Unlike current solutions, the tool renders the text in 3-D, which allows easy adjustment of the text. In particular, the tool has been implemented in Java, which introduces flexible deployment and eliminates the need for any special software or know-how on the part of the end user.

  3. Text extraction method for historical Tibetan document images based on block projections

    NASA Astrophysics Data System (ADS)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of their connected components and the corner point density. By analyzing the filtered blocks' projections, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
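
    A rough sketch of the block-filtering idea follows (OpenCV and an 8x8 block grid assumed; the thresholds are illustrative, and the paper's corner-point-density test is omitted):

        import cv2
        import numpy as np

        def candidate_text_blocks(binary_img, rows=8, cols=8, min_cc=5, max_cc=200):
            """Divide a binarized page into equal blocks and keep those whose
            connected-component count falls in a plausible range for text."""
            h, w = binary_img.shape
            bh, bw = h // rows, w // cols
            kept = []
            for r in range(rows):
                for c in range(cols):
                    block = binary_img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                    n, _ = cv2.connectedComponents(block.astype(np.uint8))
                    if min_cc <= n - 1 <= max_cc:  # n includes the background label
                        kept.append((r, c))
            return kept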

  4. A text zero-watermarking method based on keyword dense interval

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin

    2017-07-01

    Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, earlier methods rarely focused on the key content of the digital carrier. The idea of protecting key content is more targeted and can be applied to different kinds of digital information, including text, image, and video. In this paper, we use text as the research object and propose a text zero-watermarking method that uses the keyword dense interval (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of the KDI and giving a method for KDI extraction. Second, we design the detection model, which includes secondary generation of the zero-watermark and a similarity computation method for the keyword distribution. Experiments were carried out, and the results show that the proposed method performs better than other available methods, especially under sentence-transformation and synonym-substitution attacks.
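
    The abstract does not specify how a keyword dense interval is located; one plausible, simplified reading is a sliding-window density test over the token stream, sketched below with illustrative window and threshold values:

        def keyword_dense_intervals(tokens, keywords, window=50, min_hits=3):
            """Return (start, end) token index spans in which at least `min_hits`
            keywords occur within `window` tokens -- a simple stand-in for KDI extraction."""
            keyword_set = set(keywords)
            hits = [i for i, t in enumerate(tokens) if t in keyword_set]
            intervals = []
            for i, first in enumerate(hits):
                j = i
                while j + 1 < len(hits) and hits[j + 1] - first < window:
                    j += 1
                if j - i + 1 >= min_hits:
                    intervals.append((first, hits[j]))
            return intervals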

  5. After APOD: From the Website to the Classroom and Beyond

    NASA Astrophysics Data System (ADS)

    Wilson, Teresa; APOD

    2017-01-01

    Astronomy Picture of the Day (APOD) images may start on the apod.nasa.gov website, but their reach goes much further than the individual sitting at their computer screen. They provoke questions that prompt readers to email the authors; teachers use the images in their classrooms; students use them in their projects. This presentation will take a look at some of the work done using APOD images and text, including public outreach via middle school presentations and email communications, and academic uses beyond astronomy such as lesson plans on atmospheric refraction and even plagiarism, copyright, and fair use.

  6. Towards the intrahour forecasting of direct normal irradiance using sky-imaging data.

    PubMed

    Nou, Julien; Chauvin, Rémi; Eynard, Julien; Thil, Stéphane; Grieu, Stéphane

    2018-04-01

    Increasing power plant efficiency through improved operation is key in the development of Concentrating Solar Power (CSP) technologies. To this end, one of the most challenging topics remains accurately forecasting the solar resource at a short-term horizon. Indeed, in CSP plants, production is directly impacted by both the availability and variability of the solar resource and, more specifically, by Direct Normal Irradiance (DNI). The present paper deals with a new approach to the intrahour forecasting (the forecast horizon [Formula: see text] is up to [Formula: see text] ahead) of DNI, taking advantage of the fact that this quantity can be split into two terms, i.e. clear-sky DNI and the clear-sky index. Clear-sky DNI is forecasted from DNI measurements, using an empirical model (Ineichen and Perez, 2002) combined with a persistence of atmospheric turbidity. Moreover, in the framework of the CSPIMP (Concentrating Solar Power plant efficiency IMProvement) research project, PROMES-CNRS has developed a sky imager able to provide High Dynamic Range (HDR) images. So, regarding the clear-sky index, it is forecasted from sky-imaging data, using an Adaptive Network-based Fuzzy Inference System (ANFIS). A hybrid algorithm has been developed for the detection of clouds; it takes inspiration from the classification algorithm proposed by Ghonima et al. (2012) when clear-sky anisotropy is known and from the hybrid thresholding algorithm proposed by Li et al. (2011) in the opposite case. Performance is evaluated via a comparative study in which persistence models - either a persistence of DNI or a persistence of the clear-sky index - are included. Preliminary results highlight that the proposed approach has the potential to outperform these models (both persistence models achieve similar performance) in terms of forecasting accuracy: over the test data used, RMSE (the Root Mean Square Error) is reduced by about [Formula: see text], with [Formula: see text], and [Formula: see text], with [Formula: see text].
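
    The persistence baselines used for comparison are simple to state; a minimal sketch (NumPy assumed, names illustrative) carries the last observed clear-sky index forward over the horizon and rebuilds the DNI forecast from it:

        import numpy as np

        def persistence_forecast(kc_history, horizon_steps):
            """Persistence baseline: the forecast clear-sky index equals its last observed value."""
            return np.full(horizon_steps, kc_history[-1])

        # Forecast DNI = forecast clear-sky DNI (from the clear-sky model) * persisted clear-sky index:
        # dni_forecast = clear_sky_dni_forecast * persistence_forecast(kc_history, horizon_steps)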

  7. Early Detection | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"171","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Early Detection Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Early Detection Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Early

  8. How concept images affect students' interpretations of Newton's method

    NASA Astrophysics Data System (ADS)

    Engelke Infante, Nicole; Murphy, Kristen; Glenn, Celeste; Sealey, Vicki

    2018-07-01

    Knowing when students have the prerequisite knowledge to be able to read and understand a mathematical text is a perennial concern for instructors. Using text describing Newton's method and Vinner's notion of concept image, we exemplify how prerequisite knowledge influences understanding. Through clinical interviews with first-semester calculus students, we determined how evoked concept images of tangent lines and roots contributed to students' interpretation and application of Newton's method. Results show that some students' concept images of root and tangent line developed throughout the interview process, and most students were able to adequately interpret the text on Newton's method. However, students with insufficient concept images of tangent line and students who were unwilling or unable to modify their concept images of tangent line after reading the text were not successful in interpreting Newton's method.

  9. System for line drawings interpretation

    NASA Astrophysics Data System (ADS)

    Boatto, L.; Consorti, Vincenzo; Del Buono, Monica; Eramo, Vincenzo; Esposito, Alessandra; Melcarne, F.; Meucci, Mario; Mosciatti, M.; Tucci, M.; Morelli, Arturo

    1992-08-01

    This paper describes an automatic system that extracts information from line drawings, in order to feed CAD or GIS systems. The line drawings that we analyze contain interconnected thin lines, dashed lines, text, and symbols. Characters and symbols may overlap with lines. Our approach is based on the properties of the run representation of a binary image, which allow the image to be given a graph structure. Using this graph structure, several algorithms have been designed to identify, directly in the raster image, straight segments, dashed lines, text, symbols, hatching lines, etc. Straight segments and dashed lines are converted into vectors, with high accuracy and good noise immunity. Characters and symbols are recognized by means of a recognizer, specifically developed for this application, designed to be insensitive to rotation and scaling. Subsequent processing steps include an "intelligent" search through the graph in order to detect closed polygons, dashed lines, text strings, and other higher-level logical entities, followed by the identification of relationships (adjacency, inclusion, etc.) between them. Relationships are further translated into a formal description of the drawing. The output of the system can be used as input to a Geographic Information System package. The system is currently used by the Italian Land Register Authority to process cadastral maps.
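
    The run representation underlying the graph structure can be sketched with a minimal NumPy example (not the authors' code) that encodes each row of a binary image as foreground runs:

        import numpy as np

        def row_runs(binary_img):
            """Encode each row of a 0/1 image as a list of (start, end) foreground runs."""
            runs = []
            for row in np.asarray(binary_img):
                padded = np.concatenate(([0], row, [0]))
                changes = np.flatnonzero(np.diff(padded))
                starts, ends = changes[0::2], changes[1::2]
                runs.append(list(zip(starts.tolist(), ends.tolist())))
            return runs

    Runs in adjacent rows that overlap horizontally would then be linked as edges, which is what gives the image the graph structure the abstract refers to.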

  10. The content of social media's shared images about Ebola: a retrospective study.

    PubMed

    Seltzer, E K; Jean, N S; Kramer-Golinkoff, E; Asch, D A; Merchant, R M

    2015-09-01

    Social media have strongly influenced awareness and perceptions of public health emergencies, but a considerable amount of social media content is now carried through images, rather than just text. This study's objective is to explore how image-sharing platforms are used for information dissemination in public health emergencies. Retrospective review of images posted on two popular image-sharing platforms to characterize public discourse about Ebola. Using the keyword '#ebola' we identified a 1% sample of images posted on Instagram and Flickr across two sequential weeks in November 2014. Images from both platforms were independently coded by two reviewers and characterized by themes. We reviewed 1217 images posted on Instagram and Flickr and identified themes. Nine distinct themes were identified. These included: images of health care workers and professionals [308 (25%)], West Africa [75 (6%)], the Ebola virus [59 (5%)], and artistic renderings of Ebola [64 (5%)]. Also identified were images with accompanying embedded text related to Ebola and associated: facts [68 (6%)], fears [40 (3%)], politics [46 (4%)], and jokes [284 (23%)]. Several [273 (22%)] images were unrelated to Ebola or its sequelae. Instagram images were primarily coded as jokes [255 (42%)] or unrelated [219 (36%)], while Flickr images primarily depicted health care workers and other professionals [281 (46%)] providing care or other services for prevention or treatment. Image sharing platforms are being used for information exchange about public health crises, like Ebola. Use differs by platform and discerning these differences can help inform future uses for health care professionals and researchers seeking to assess public fears and misinformation or provide targeted education/awareness interventions. Copyright © 2015 The Royal Institute of Public Health. All rights reserved.

  11. The Art of Astronomy: A New General Education Course for Non-Science Majors

    NASA Astrophysics Data System (ADS)

    Pilachowski, Catherine A.; van Zee, Liese

    2017-01-01

    The Art of Astronomy is a new general education course developed at Indiana University. The topic appeals to a broad range of undergraduates and the course gives students the tools to understand and appreciate astronomical images in a new way. The course explores the science of imaging the universe and the technology that makes the images possible. Topics include the night sky, telescopes and cameras, light and color, and the science behind the images. "Coloring the Universe: An Insider's Look at Making Spectacular Images of Space" by T. A. Rector, K. Arcand, and M. Watzke serves as the basic text for the course, supplemented by readings from the web. Through the course, students participate in exploration activities designed to help them first to understand astronomy images, and then to create them. Learning goals include an understanding of scientific inquiry, an understanding of the basics of imaging science as applied in astronomy, a knowledge of the electromagnetic spectrum and how observations at different wavelengths inform us about different environments in the universe, and an ability to interpret astronomical images to learn about the universe and to model and understand the physical world.

  12. Exploring access to scientific literature using content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2007-03-01

    The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as SPIE Digital Library, IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citations Report (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal. This included 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e. the figure combines different images and/or graphs. According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, already 95.5% of articles could be retrieved by means of CBIR. The challenge for CBIR in scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research and suggest that additional research in this area is warranted.

  13. Teaching Information Literacy to Generation Y.

    ERIC Educational Resources Information Center

    Manuel, Kate

    2002-01-01

    Discusses how to change library information literacy classes for Generation Y students (born after 1981) to accommodate their learning styles and preferences, based on experiences at California State University, Hayward. Topics include positive outlooks toward technology; orientation toward images, not linear text; low thresholds for boredom and…

  14. Textbook of Uroradiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunnick, N.R.; McCallum, R.W.; Sandler, C.M.

    1991-01-01

    This book provides the practicing radiologist and the radiology resident with a comprehensive text of manageable size that integrates all aspects of adult uroradiology. Topics covered include: anatomy, embryology, and congenital anomalies of the urinary tract; techniques for imaging of the urinary tract; contrast material; pathologies; and interventional uroradiology.

  15. Community Oncology and Prevention Trials | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"168","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Early Detection Research Group Homepage Image","field_file_image_title_text[und][0][value]":"Early Detection Research Group Homepage Image","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Early Detection Research Group Homepage Image","title":"Early

  16. Archive of Boomer seismic reflection data: collected during USGS Cruise 96CCT01, nearshore south central South Carolina coast, June 26 - July 1, 1996

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Wiese, Dana S.

    2003-01-01

    This archive consists of marine seismic reflection profile data collected in four survey areas from southeast of Charleston Harbor to the mouth of the North Edisto River of South Carolina. These data were acquired June 26 - July 1, 1996, aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper Text Markup Language (HTML), Portable Document Format (PDF), Rich Text Format (RTF), Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images, and shapefiles. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) map documents provided were created with Environmental Systems Research Institute (ESRI) GIS software ArcView 3.2 and 8.1.

  17. Detection of text strings from mixed text/graphics images

    NASA Astrophysics Data System (ADS)

    Tsai, Chien-Hua; Papachristou, Christos A.

    2000-12-01

    A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region growing) strategy, the algorithm is able to separate text from graphics and adapts to changes in document type, language category (e.g., English, Chinese, and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the document skew that commonly occurs in scanned documents, without requiring skew correction prior to discrimination, whereas methods such as projection profiles or run-length coding are not always suitable under these conditions. The method has been tested with a variety of printed documents from different origins using one common set of parameters, and the performance of the algorithm in terms of computational efficiency is demonstrated on several test images from the evaluation.
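
    The union-find structure behind the region-growing strategy is standard; a minimal Python sketch (not the authors' implementation) is:

        class UnionFind:
            """Minimal disjoint-set structure of the kind a region-growing
            text/graphics separation can be built on."""
            def __init__(self, n):
                self.parent = list(range(n))

            def find(self, x):
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]  # path halving
                    x = self.parent[x]
                return x

            def union(self, a, b):
                ra, rb = self.find(a), self.find(b)
                if ra != rb:
                    self.parent[rb] = ra

        # Connected components: union every pair of adjacent foreground pixels; pixels
        # sharing a root then form one region (a candidate character or graphic).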

  18. Text image authenticating algorithm based on MD5-hash function and Henon map

    NASA Astrophysics Data System (ADS)

    Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue

    2017-07-01

    In order to meet the evidentiary requirements for text images, this paper proposes a fragile watermarking algorithm based on a hash function and the Henon map. The algorithm divides a text image into blocks, identifies the flippable and non-flippable pixels of every block according to PSD, generates a watermark from the non-flippable pixels with MD5, encrypts the watermark with the Henon map, and selects the embedding blocks. The simulation results show that the algorithm, which has good tampering-localization ability, can be used to authenticate the authenticity and integrity of text images for forensic purposes.
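
    The two building blocks named in the abstract, the Henon map and MD5, are well defined; a sketch of how they might be combined follows (the seed values, the quantization to bytes, and the helper names are illustrative assumptions, not details from the paper):

        import hashlib
        import numpy as np

        def henon_keystream(length, x0=0.1, y0=0.3, a=1.4, b=0.3):
            """Byte stream from the Henon map x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n."""
            x, y = x0, y0
            out = np.empty(length, dtype=np.uint8)
            for i in range(length):
                x, y = 1 - a * x * x + y, b * x
                out[i] = int(abs(x) * 1e6) % 256
            return out

        def block_watermark(block_pixels):
            """MD5 digest of a block's non-flippable pixels (an iterable of byte values)."""
            return hashlib.md5(bytes(block_pixels)).digest()

        # Encrypting the watermark: XOR the digest with a Henon keystream of equal length.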

  19. Data and image transfer using mobile phones to strengthen microscopy-based diagnostic services in low and middle income country laboratories.

    PubMed

    Tuijn, Coosje J; Hoefman, Bas J; van Beijma, Hajo; Oskam, Linda; Chevrollier, Nicolas

    2011-01-01

    The emerging market of mobile phone technology and its use in the health sector is rapidly expanding and connecting even the most remote areas of world. Distributing diagnostic images over the mobile network for knowledge sharing, feedback or quality control is a logical innovation. To determine the feasibility of using mobile phones for capturing microscopy images and transferring these to a central database for assessment, feedback and educational purposes. A feasibility study was carried out in Uganda. Images of microscopy samples were taken using a prototype connector that could fix a variety of mobile phones to a microscope. An Information Technology (IT) platform was set up for data transfer from a mobile phone to a website, including feedback by text messaging to the end user. Clear images were captured using mobile phone cameras of 2 megapixels (MP) up to 5MP. Images were sent by mobile Internet to a website where they were visualized and feedback could be provided to the sender by means of text message. The process of capturing microscopy images on mobile phones, relaying them to a central review website and feeding back to the sender is feasible and of potential benefit in resource poor settings. Even though the system needs further optimization, it became evident from discussions with stakeholders that there is a demand for this type of technology.

  20. Data Science Bowl Launched to Improve Lung Cancer Screening | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2078","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl Logo","field_file_image_title_text[und][0][value]":"Data Science Bowl Logo","field_folder[und]":"76"},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl

  1. Launch Control System Software Development System Automation Testing

    NASA Technical Reports Server (NTRS)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This system requires high quality testing that will measure and test the capabilities of the system. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group including interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI, which uses streamed simulated data from the testing servers to produce data, plots, statuses, etc. in the GUI. The software used to develop the automated tests included an automated testing framework and an automation library. The automated testing framework has a tabular-style syntax, which means each line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The automation library contains functionality to automate anything that appears on a desired screen, using image recognition software to detect and control GUI components. The data section contains any data values created strictly for the current test file. The body section holds the tests that are being run. The function section can include any number of functions that may be used by the current test file or any other file that resources it. The resources and body sections are required for all test files; the data and function sections can be left empty if the data values and functions being used come from a resourced library or another file. To help equip the automation team with better tools, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task of installing and training an optical character recognition (OCR) tool to Brandon Echols, a fellow intern, and me. The purpose of the OCR tool is to analyze an image and find the coordinates of any group of text. Issues that arose while installing the OCR tool included the absence of certain libraries needed to train the tool and an outdated software version. We eventually resolved the issues and successfully installed the OCR tool. Training the tool required many images and different fonts and sizes, but in the end the tool learned to accurately decipher the text in the images and their coordinates. The OCR tool produced a file that contained significant metadata for each section of text, but only the text and its coordinates were required for our purpose. The team wrote a script to parse the information we wanted from the OCR file into a different file to be used by automation functions within the automated framework. Since a majority of the development and testing of the automated test cases for the GUI in question has been done using live simulated data on the workstations at the Launch Control Center (LCC), a large amount of progress has been made. As of this writing, about 60% of all automated testing has been implemented. Additionally, the OCR tool will help make our automated tests more robust because its text recognition scales well to different fonts and sizes. Soon the whole test system will be automated, allowing more full-time engineers to work on development projects.
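
    As an illustration of the OCR step described above (recovering each group of text and its coordinates), a stand-in sketch using the Tesseract engine via pytesseract, rather than the specific tool the team installed and trained, could look like this:

        import pytesseract
        from PIL import Image

        def text_with_coordinates(image_path):
            """Return (text, left, top, width, height) for each word found in a screenshot."""
            data = pytesseract.image_to_data(Image.open(image_path),
                                             output_type=pytesseract.Output.DICT)
            results = []
            for i, word in enumerate(data["text"]):
                if word.strip():
                    results.append((word, data["left"][i], data["top"][i],
                                    data["width"][i], data["height"][i]))
            return results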

  2. Radioisotope studies in cardiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biersack, H.J.; Cox, P.H.

    1985-01-01

    In this text, reviews of all available techniques in this field have been collected, including methods that are still in the developmental stage. After a discussion of the pathophysiology of myocardial perfusion, metabolism, and recent developments in instrumentation, particular chapters are devoted to data processing, radiopharmaceuticals, and labelled metabolites. Special reference is made to cardiac blood-pool imaging, including evaluations of global and regional ventricular function and regurgitation volumes.

  3. Europeana: Think Culture

    ERIC Educational Resources Information Center

    Kail, Candice

    2011-01-01

    Europeana: Think Culture (http://www.europeana.eu) is a wonderful cultural repository. It includes more than 15 million items (images, text, audio, and video) from 1,500 European institutions. Europeana provides access to an abundance of cultural and heritage information and knowledge. Because Europeana has partnered with and brought together so…

  4. Description of the IV + V System Software Package.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management: An International Journal for Library and Information Services, 1984

    1984-01-01

    Describes the IV + V System, a software package designed by the Institut fur Maschinelle Dokumentation for the United Nations General Information Programme and UNISIST to support automation of local information and documentation services. Principal program features and functions outlined include input/output, databank, text image, output, and…

  5. Assessing the Validity of Discourse Analysis: Transdisciplinary Convergence

    ERIC Educational Resources Information Center

    Jaipal-Jamani, Kamini

    2014-01-01

    Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to…

  6. Seafloor video footage and still-frame grabs from U.S. Geological Survey cruises in Hawaiian nearshore waters

    USGS Publications Warehouse

    Gibbs, Ann E.; Cochran, Susan A.; Tierney, Peter W.

    2013-01-01

    Underwater video footage was collected in nearshore waters (<60-meter depth) off the Hawaiian Islands from 2002 to 2011 as part of the U.S. Geological Survey (USGS) Coastal and Marine Geology Program's Pacific Coral Reef Project, to improve seafloor characterization and for the development and ground-truthing of benthic-habitat maps. This report includes nearly 53 hours of digital underwater video footage collected during four USGS cruises and more than 10,200 still images extracted from the videos, including still frames from every 10 seconds along transect lines, and still frames showing both an overview and a near-bottom view from fixed stations. Environmental Systems Research Institute (ESRI) shapefiles of individual video and still-image locations, and Google Earth kml files with explanatory text and links to the video and still images, are included. This report documents the various camera systems and methods used to collect the videos, and the techniques and software used to convert the analog video tapes into digital data in order to process the images for optimum viewing and to extract the still images, along with a brief summary of each survey cruise.

  7. Millimeter-wave imaging of magnetic fusion plasmas: technology innovations advancing physics understanding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y.; Tobias, B.; Chang, Y. -T.

    Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. The microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These also have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfven eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today's most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.

  8. Arkansas and Louisiana Aeromagnetic and Gravity Maps and Data - A Website for Distribution of Data

    USGS Publications Warehouse

    Bankey, Viki; Daniels, David L.

    2008-01-01

    This report contains digital data, image files, and text files describing data formats for aeromagnetic and gravity data used to compile the State aeromagnetic and gravity maps of Arkansas and Louisiana. The digital files include grids, images, ArcInfo, and Geosoft compatible files. In some of the data folders, ASCII files with the extension 'txt' describe the format and contents of the data files. Read the 'txt' files before using the data files.

  9. Digital Aeromagnetic Data and Derivative Products from a Helicopter Survey over the Town of Taos and Surrounding Areas, Taos County, New Mexico

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; ,

    2004-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during a helicopter geophysical survey in northern New Mexico during October 2003. The survey covers the Town of Taos, Taos Pueblo, and surrounding communities in Taos County. Several derivative products from these data are also presented, including reduced-to-pole, horizontal gradient magnitude, and downward continued grids and images.

  10. Digital aeromagnetic data and derivative products from a helicopter survey over the town of Blanca and surrounding areas, Alamosa and Costilla counties, Colorado

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; ,

    2004-01-01

    This CD-ROM contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during a helicopter geophysical survey in southern Colorado during October 2003. The survey covers the town of Blanca and surrounding communities in Alamosa and Costilla Counties. Several derivative products from these data are also presented, including reduced-to-pole, horizontal gradient magnitude, and downward continued grids and images.

  11. Sub-diffusive scattering parameter maps recovered using wide-field high-frequency structured light imaging.

    PubMed

    Kanick, Stephen Chad; McClatchy, David M; Krishnaswamy, Venkataramanan; Elliott, Jonathan T; Paulsen, Keith D; Pogue, Brian W

    2014-10-01

    This study investigates the hypothesis that structured light reflectance imaging with high spatial frequency patterns [Formula: see text] can be used to quantitatively map the anisotropic scattering phase function distribution [Formula: see text] in turbid media. Monte Carlo simulations were used in part to establish a semi-empirical model of demodulated reflectance ([Formula: see text]) in terms of dimensionless scattering [Formula: see text] and [Formula: see text], a metric of the first two moments of the [Formula: see text] distribution. Experiments completed in tissue-simulating phantoms showed that simultaneous analysis of [Formula: see text] spectra sampled at multiple [Formula: see text] in the frequency range [0.05-0.5] [Formula: see text] allowed accurate estimation of both [Formula: see text] in the relevant tissue range [0.4-1.8] [Formula: see text], and [Formula: see text] in the range [1.4-1.75]. Pilot measurements of a healthy volunteer exhibited [Formula: see text]-based contrast between scar tissue and surrounding normal skin, which was not as apparent in wide field diffuse imaging. These results represent the first wide-field maps to quantify sub-diffuse scattering parameters, which are sensitive to sub-microscopic tissue structures and composition, and therefore, offer potential for fast diagnostic imaging of ultrastructure on a size scale that is relevant to surgical applications.

  12. Piezoelectric Composite Micromachined Multifrequency Transducers for High-Resolution, High-Contrast Ultrasound Imaging for Improved Prostate Cancer Assessment

    DTIC Science & Technology

    2016-10-01

    Only fragments of this progress report were extracted; they indicate that basic details are given in the attached text, that extensive additional data, discussion, and conclusions are included in attachments, and that Aim 1 is to develop a new type of dual-frequency PC-MUT.

  13. Millimeter-wave imaging of magnetic fusion plasmas: technology innovations advancing physics understanding

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Tobias, B.; Chang, Y.-T.; Yu, J.-H.; Li, M.; Hu, F.; Chen, M.; Mamidanna, M.; Phan, T.; Pham, A.-V.; Gu, J.; Liu, X.; Zhu, Y.; Domier, C. W.; Shi, L.; Valeo, E.; Kramer, G. J.; Kuwahara, D.; Nagayama, Y.; Mase, A.; Luhmann, N. C., Jr.

    2017-07-01

    Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. Microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfvén eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today’s most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.

  14. Millimeter-wave imaging of magnetic fusion plasmas: technology innovations advancing physics understanding

    DOE PAGES

    Wang, Y.; Tobias, B.; Chang, Y. -T.; ...

    2017-03-14

    Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. The microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These also have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfven eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today's most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.

  15. Temporal Tuning of Word- and Face-selective Cortex.

    PubMed

    Yeatman, Jason D; Norcia, Anthony M

    2016-11-01

    Sensitivity to temporal change places fundamental limits on object processing in the visual system. An emerging consensus from the behavioral and neuroimaging literature suggests that temporal resolution differs substantially for stimuli of different complexity and for brain areas at different levels of the cortical hierarchy. Here, we used steady-state visually evoked potentials to directly measure three fundamental parameters that characterize the underlying neural response to text and face images: temporal resolution, peak temporal frequency, and response latency. We presented full-screen images of text or a human face, alternated with a scrambled image, at temporal frequencies between 1 and 12 Hz. These images elicited a robust response at the first harmonic that showed differential tuning, scalp topography, and delay for the text and face images. Face-selective responses were maximal at 4 Hz, but text-selective responses, by contrast, were maximal at 1 Hz. The topography of the text image response was strongly left-lateralized at higher stimulation rates, whereas the response to the face image was slightly right-lateralized but nearly bilateral at all frequencies. Both text and face images elicited steady-state activity at more than one apparent latency; we observed early (141-160 msec) and late (>250 msec) text- and face-selective responses. These differences in temporal tuning profiles are likely to reflect differences in the nature of the computations performed by word- and face-selective cortex. Despite the close proximity of word- and face-selective regions on the cortical surface, our measurements demonstrate substantial differences in the temporal dynamics of word- versus face-selective responses.

  16. Text and Image of Advertising in Nigeria: An Enterprise of Socio-Cultural Reproduction

    ERIC Educational Resources Information Center

    Dalamu, Taofeek

    2016-01-01

    The role of language in the construction of socio-cultural reality is inevitable. That is why text is used as a pillar that supports the explication of the intended purpose of images applied in multifaceted ad plates. It is a phenomenal tradition that has remained strong in ad campaigns. Advertisers make images and text as discrete components that…

  17. NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2476","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of the National Institutes of Health Clinical Center (Building 10) in Bethesda, Maryland.","field_file_image_title_text[und][0][value]":false},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of

  18. Pictures Worth a Thousand Words: Noncommercial Tobacco Content in the Lesbian, Gay, and Bisexual Press

    PubMed Central

    SMITH, ELIZABETH A.; OFFEN, NAPHTALI; MALONE, RUTH E.

    2009-01-01

    Smoking prevalence in the lesbian, gay, and bisexual (LGB) community is higher than in the mainstream population. The reason is undetermined; however, normalization of tobacco use in the media has been shown to affect smoking rates. To explore whether this might be a factor in the LGB community, we examined noncommercial imagery and text relating to tobacco and smoking in LGB magazines and newspapers. Tobacco-related images were frequent and overwhelmingly positive or neutral about tobacco use. Images frequently associated smoking with celebrities. Text items unrelated to tobacco were often illustrated with smoking imagery. Text items about tobacco were likely to be critical of tobacco use; however, there were three times as many images as text items. The number of image items was not accounted for by the number of text items: nearly three quarters of all tobacco-related images (73.8%) were unassociated with relevant text. Tobacco imagery is pervasive in LGB publications. The predominant message about tobacco use in the LGB press is positive or neutral; tobacco is often glamorized. Noncommercial print images of smoking may normalize it, as movie product placement does. Media advocacy approaches could counter normalization of smoking in LGB-specific media. PMID:17074732

  19. Handwritten text line segmentation by spectral clustering

    NASA Astrophysics Data System (ADS)

    Han, Xuecheng; Yao, Hui; Zhong, Guoqiang

    2017-02-01

    Since handwritten text lines are generally skewed and not clearly separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the foreground pixel points. We apply spectral clustering to this similarity matrix and use orthogonal k-means clustering to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
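
    As an illustration of the pipeline sketched in this abstract (binarize, build a pixel affinity matrix, spectrally cluster), the following Python sketch clusters foreground pixels into a known number of text lines with scikit-learn. It is not the authors' implementation: the function name, the RBF affinity, the gamma value, and the assumption that the number of lines is known in advance are all simplifications introduced here, and ordinary k-means on the spectral embedding stands in for the paper's orthogonal k-means.

        import numpy as np
        from skimage.filters import threshold_otsu
        from sklearn.cluster import SpectralClustering

        def segment_text_lines(gray_image, n_lines):
            """Cluster foreground (ink) pixels of a handwritten page into text lines."""
            # Step 1: binarize so that ink pixels become the foreground.
            binary = gray_image < threshold_otsu(gray_image)
            ys, xs = np.nonzero(binary)
            points = np.column_stack([ys, xs]).astype(float)

            # Steps 2-3: build a pixel affinity (RBF kernel on coordinates) and
            # spectrally cluster it; k-means on the spectral embedding assigns labels.
            labels = SpectralClustering(
                n_clusters=n_lines,
                affinity="rbf",
                gamma=0.001,  # wide kernel so pixels on the same line stay connected
                assign_labels="kmeans",
                random_state=0,
            ).fit_predict(points)

            # One (row, col) array of pixel coordinates per detected line.
            return [points[labels == k].astype(int) for k in range(n_lines)]

    Building a dense affinity over all foreground pixels is quadratic in the number of ink pixels, so for full pages one would subsample pixels or cluster connected components instead.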

  20. Scalable ranked retrieval using document images

    NASA Astrophysics Data System (ADS)

    Jain, Rajiv; Oard, Douglas W.; Doermann, David

    2013-12-01

    Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role for some tasks. The best method to perform ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real-world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content-based image retrieval finds a substantial number of documents that text retrieval misses and that, when used as a basis for relevance feedback, it can yield improvements in retrieval effectiveness.

  1. Global and Local Features Based Classification for Bleed-Through Removal

    NASA Astrophysics Data System (ADS)

    Hu, Xiangyu; Lin, Hui; Li, Shutao; Sun, Bin

    2016-12-01

    The text on one side of a historical document often seeps through and appears on the other side, so bleed-through is a common problem in historical document images. It makes documents hard to read and the text difficult to recognize, so the bleed-through has to be removed to improve image quality and readability. This paper proposes a bleed-through removal method based on the extraction of global and local features. A Gaussian mixture model is used to obtain the global features of the images, while local features are extracted from the patch around each pixel. An extreme learning machine classifier is then used to classify pixels of the scanned images into foreground text and bleed-through components. Experimental results on real document image datasets show that the proposed method outperforms state-of-the-art bleed-through removal methods and preserves the text strokes well.
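
    A minimal Python sketch of the local-feature and classification stages described above is given below. The patch features and the tiny extreme learning machine are illustrative only: the function and class names are invented here, the GMM-based global features are omitted, and the hidden-layer size and activation are arbitrary choices rather than the parameters used in the paper.

        import numpy as np

        class MinimalELM:
            """Toy extreme learning machine: random hidden layer + least-squares output weights."""

            def __init__(self, n_hidden=200, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)          # random nonlinear feature map
                self.beta = np.linalg.pinv(H) @ y         # closed-form output weights
                return self

            def predict(self, X):
                H = np.tanh(X @ self.W + self.b)
                return (H @ self.beta > 0.5).astype(int)  # 1 = bleed-through, 0 = foreground text

        def local_features(gray, y, x, half=3):
            """Local feature vector: the flattened gray-level patch around pixel (y, x)."""
            patch = gray[y - half:y + half + 1, x - half:x + half + 1]
            return patch.ravel() / 255.0

    Training would pair such feature vectors with ground-truth labels for text and bleed-through pixels; the removal step then replaces pixels predicted as bleed-through with an estimated background colour.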

  2. A framework of text detection and recognition from natural images for mobile device

    NASA Astrophysics Data System (ADS)

    Selmi, Zied; Ben Halima, Mohamed; Wali, Ali; Alimi, Adel M.

    2017-03-01

    In light of the remarkable audio-visual impact on modern life and the massive use of new technologies (smartphones, tablets, ...), the image has acquired great importance in the field of communication. It has become one of the most effective, attractive, and suitable means of transmitting information between people. Of the various kinds of information that can be extracted from an image, our focus is specifically on text. Because its detection and recognition in natural images is a major problem in many applications, text has drawn the attention of a great number of researchers in recent years. In this paper, we present a framework for text detection and recognition from natural images for mobile devices.

  3. Image Engine: an object-oriented multimedia database for storing, retrieving and sharing medical images and text.

    PubMed Central

    Lowe, H. J.

    1993-01-01

    This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer-to-peer file-sharing protocols. Image Engine supports both free-text and controlled-vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596

  4. 76 FR 45300 - Notice of Issuance of Materials License SUA-1597 and Record of Decision for Uranerz Energy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-28

    ... considered but eliminated from detailed analysis include conventional uranium mining and milling, conventional mining and heap leach processing, alternative site location, alternate lixiviants, and alternate...'s Agencywide Document Access and Management System (ADAMS), which provides text and image files of...

  5. Increasing Student Learning through Multimedia Projects.

    ERIC Educational Resources Information Center

    Simkins, Michael; Cole, Karen; Tavalin, Fern; Means, Barbara

    This book discusses enhancing student achievement through project-based learning with multimedia. Chapter 1 describes project-based multimedia learning. Chapter 2 presents a multimedia primer, including the five basic types of media objects (i.e., images, text, sound, motion, and interactivity). Chapter 3 addresses making a real-world connection,…

  6. Web-Based Learning and Instruction Support System for Pneumatics

    ERIC Educational Resources Information Center

    Yen, Chiaming; Li, Wu-Jeng

    2003-01-01

    This research presents a Web-based learning and instructional system for Pneumatics. The system includes course material, remote data acquisition modules, and a pneumatic laboratory set. The course material is in the HTML format accompanied with text, still and animated images, simulation programs, and computer aided design tools. The data…

  7. PACE: A Browsable Graphical Interface.

    ERIC Educational Resources Information Center

    Beheshti, Jamshid; And Others

    1996-01-01

    Describes PACE (Public Access Catalogue Extension), an alternative interface designed to enhance online catalogs by simulating images of books and library shelves to help users browse through the catalog. Results of a test in a college library against a text-based online public access catalog, including student attitudes, are described.…

  8. Internet-Accessible Scholarly Resources for the Humanities and Social Sciences.

    ERIC Educational Resources Information Center

    ACLS Newsletter, 1997

    1997-01-01

    This newsletter focuses on the presentations of a program session on Internet-accessible scholarly resources, held at the 1996 ACLS Annual Meeting. Articles in the newsletter include: "Building the Scene: Words, Images, Data, and Beyond" (David Green); "Electronic Texts: The Promise and the Reality" (Susan Hockey); "Images…

  9. Net Survey: "Top Ten Mistakes" in Academic Web Design.

    ERIC Educational Resources Information Center

    Petrik, Paula

    2000-01-01

    Highlights the top ten mistakes in academic Web design: (1) bloated graphics; (2) scaling images; (3) dense text; (4) lack of contrast; (5) font size; (6) looping animations; (7) courseware authoring software; (8) scrolling/long pages; (9) excessive download; and (10) the nothing site. Includes resources. (CMK)

  10. Influence of Hearing Risk Information on the Motivation and Modification of Personal Listening Device Use.

    PubMed

    Serpanos, Yula C; Berg, Abbey L; Renne, Brittany

    2016-12-01

    The purpose of this study was (a) to investigate the behaviors, knowledge, and motivators associated with personal listening device (PLD) use and (b) to determine the influence of different types of hearing health risk education information (text with or without visual images) on motivation to modify PLD listening use behaviors in young adults. College-age students (N = 523) completed a paper-and-pencil survey tapping their behaviors, knowledge, and motivation regarding listening to music or media at high volume using PLDs. Participants rated their motivation to listen to PLDs at lower volume levels following each of three information sets: text only, behind-the-ear hearing aid image with text, and inner ear hair cell damage image with text. Acoustically pleasing and emotional motives were the most frequently cited (38%-45%) reasons for listening to music or media using a PLD at high volume levels. The behind-the-ear hearing aid image with text information was significantly (p < .0001) more motivating to participants than text alone or the inner ear hair cell damage image with text. Evocative imagery using hearing aids may be an effective approach in hearing protective health campaigns for motivating safer listening practices with PLDs in young adults.

  11. Document image cleanup and binarization

    NASA Astrophysics Data System (ADS)

    Wu, Victor; Manmatha, Raghaven

    1998-04-01

    Image binarization is a difficult task for documents with text over textured or shaded backgrounds, poor contrast, and/or considerable noise. Current optical character recognition (OCR) and document analysis technology does not handle such documents well. We have developed a simple yet effective algorithm for document image clean-up and binarization. The algorithm consists of two basic steps. In the first step, the input image is smoothed using a low-pass filter. The smoothing operation enhances the text relative to any background texture, because background texture normally has higher frequency content than text does; it also removes speckle noise. In the second step, the intensity histogram of the smoothed image is computed and a threshold is automatically selected as follows. For black text, the first peak of the histogram corresponds to text. Thresholding the image at the value of the valley between the first and second peaks of the histogram binarizes the image well. In order to reliably identify the valley, the histogram is smoothed by a low-pass filter before the threshold is computed. The algorithm has been applied to some 50 images from a wide variety of sources: digitized video frames, photos, newspapers, advertisements in magazines or sales flyers, personal checks, etc. There are 21,820 characters and 4,406 words in these images. 91 percent of the characters and 86 percent of the words are successfully cleaned up and binarized. A commercial OCR system was applied to the binarized text when it used OCR-recognizable fonts. The recognition rate was 84 percent for the characters and 77 percent for the words.
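
    The two-step procedure is simple enough to sketch directly. The following Python fragment is a hedged approximation of the algorithm described above, not the authors' code: the Gaussian filter widths, the peak-finding rule, and the assumption that the smoothed histogram has at least two peaks are choices made here for illustration.

        import numpy as np
        from scipy.ndimage import gaussian_filter, gaussian_filter1d

        def cleanup_binarize(gray, img_sigma=1.5, hist_sigma=2.0):
            """Clean up and binarize dark text on a textured or shaded background."""
            # Step 1: low-pass filter the image to suppress background texture and speckle.
            smoothed = gaussian_filter(gray.astype(float), sigma=img_sigma)

            # Step 2: smooth the intensity histogram, find its first two peaks, and
            # threshold at the valley between them (first peak = text for black text).
            hist, _ = np.histogram(smoothed, bins=256, range=(0, 255))
            hist = gaussian_filter1d(hist.astype(float), sigma=hist_sigma)
            peaks = [i for i in range(1, 255)
                     if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
            p1, p2 = peaks[0], peaks[1]
            valley = p1 + int(np.argmin(hist[p1:p2 + 1]))

            return smoothed <= valley   # True where a pixel is treated as text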

  12. libprofit: Image creation from luminosity profiles

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D.; Tobar, R.

    2016-12-01

    libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).
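
    For readers unfamiliar with what "image creation from a luminosity profile" involves, the snippet below evaluates a circular Sersic profile on a pixel grid in plain NumPy. It does not use libprofit or its R/Python bindings; it is a point-sampled illustration (whereas libprofit performs accurate sub-pixel integration), and the b_n approximation used here is only reasonable for moderate Sersic indices.

        import numpy as np

        def sersic_image(shape=(128, 128), x0=64.0, y0=64.0, r_e=20.0, n=2.0, I_e=1.0):
            """Point-sampled circular Sersic profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1))."""
            b_n = 2.0 * n - 1.0 / 3.0          # common approximation to the Sersic constant
            y, x = np.indices(shape)
            r = np.hypot(x - x0, y - y0)
            return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))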

  13. Block selective redaction for minimizing loss during de-identification of burned in text in irreversibly compressed JPEG medical images.

    PubMed

    Clunie, David A; Gebow, Dan

    2015-01-01

    Deidentification of medical images requires attention both to header information and to the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified are stored in compressed form, traditionally they are decompressed, identifying text is redacted, and, if necessary, the pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can be confined to only those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described. The process can be applied either to standalone JPEG images or to JPEG bit streams encapsulated in other formats, which in the case of medical images is usually DICOM.
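
    The geometric core of block-selective redaction, mapping a rectangular region of identifying text onto the 8x8 blocks of the JPEG grid and blanking only those blocks, can be sketched as follows. This Python fragment operates on a decoded 2-D pixel array purely for illustration; the method described above performs the equivalent replacement on the entropy-coded blocks of the JPEG bit stream so that unaffected blocks are never recompressed, and the function names here are invented.

        def blocks_to_redact(x0, y0, x1, y1, block=8):
            """(row, col) indices of block-grid cells touched by rectangle [x0, x1) x [y0, y1)."""
            rows = range(y0 // block, (y1 + block - 1) // block)
            cols = range(x0 // block, (x1 + block - 1) // block)
            return [(r, c) for r in rows for c in cols]

        def redact_pixels(img, regions, block=8):
            """Black out only the blocks overlapping a redaction region (img: 2-D numpy array)."""
            out = img.copy()
            for (x0, y0, x1, y1) in regions:
                for r, c in blocks_to_redact(x0, y0, x1, y1, block):
                    out[r * block:(r + 1) * block, c * block:(c + 1) * block] = 0
            return out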

  14. CDC Vital Signs: Trucker Safety

    MedlinePlus

    ... to 84% in 2013). ... What Can Be Done ... to take rest breaks. Prohibiting truck drivers from text messaging or using a handheld cell phone while ...

  15. How to integrate quantitative information into imaging reports for oncologic patients.

    PubMed

    Martí-Bonmatí, L; Ruiz-Martínez, E; Ten, A; Alberich-Bayarri, A

    2018-05-01

    Nowadays, the images and information generated in imaging tests, as well as the reports that are issued, are digital and represent a reliable source of data. Reports can be classified according to their content and to the type of information they include into three main types: organized (free text in natural language), predefined (with templates and guidelines elaborated with previously determined natural language like that used in BI-RADS and PI-RADS), or structured (with drop-down menus displaying questions with various possible answers that have been agreed on with the rest of the multidisciplinary team, which use standardized lexicons and are structured in the form of a database with data that can be traced and exploited with statistical tools and data mining). The structured report, compatible with Management of Radiology Report Templates (MRRT), makes it possible to incorporate quantitative information related with the digital analysis of the data from the acquired images to accurately and precisely describe the properties and behavior of tissues by means of radiomics (characteristics and parameters). In conclusion, structured digital information (images, text, measurements, radiomic features, and imaging biomarkers) should be integrated into computerized reports so that they can be indexed in large repositories. Radiologic databanks are fundamental for exploiting health information, phenotyping lesions and diseases, and extracting conclusions in personalized medicine. Copyright © 2018 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.

  16. Is the recall of verbal-spatial information from working memory affected by symptoms of ADHD?

    PubMed

    Caterino, Linda C; Verdi, Michael P

    2012-10-01

    OBJECTIVE: The Kulhavy model for text learning using organized spatial displays proposes that learning will be increased when participants view visual images prior to related text. In contrast to previous studies, this study also included students who exhibited symptoms of ADHD. Participants were presented with either a map-text or text-map condition. The map-text condition led to significantly higher performance than the text-map condition overall. However, students who endorsed more symptoms of inattention and hyperactivity-impulsivity scored more poorly when asked to recall text facts, text features, and map features, and were less able to correctly place map features on a reconstructed map, than were students who endorsed fewer symptoms. The results of the study support the Kulhavy model for typical students; however, the benefit of viewing a display prior to text was not seen for students with ADHD symptoms, supporting previous studies demonstrating that ADHD appears to negatively affect operations that occur in working memory.

  17. Staying Connected on the Road: A Comparison of Different Types of Smart Phone Use in a Driving Simulator

    PubMed Central

    McNabb, Jaimie; Gray, Rob

    2016-01-01

    Previous research on smart phone use while driving has primarily focused on phone calls and texting. Drivers are now increasingly using their phones for other activities during driving, in particular social media, which have different cognitive demands. The present study compared the effects of four different smart phone tasks on car-following performance in a driving simulator. Phone tasks were chosen that vary across two factors, interaction medium (text vs. image) and task pacing (self-paced vs. experimenter-paced), and were as follows: text messaging with the experimenter (text/experimenter-paced), reading Facebook posts (text/self-paced), exchanging photos with the experimenter via Snapchat (image/experimenter-paced), and viewing updates on Instagram (image/self-paced). Drivers also performed a driving-only baseline. Brake reaction times (BRTs) were significantly greater in the text-based conditions (Mean = 1.16 s) as compared to both the image-based conditions (Mean = 0.92 s) and the baseline (0.88 s). There was no significant difference between BRTs in the image-based and baseline conditions, and there was no significant effect of task pacing. Similar results were obtained for Time Headway variability. These results are consistent with the picture superiority effect found in memory research and suggest that image-based interfaces could provide safer ways to “stay connected” while driving than text-based interfaces. PMID:26886099

  18. Staying Connected on the Road: A Comparison of Different Types of Smart Phone Use in a Driving Simulator.

    PubMed

    McNabb, Jaimie; Gray, Rob

    2016-01-01

    Previous research on smart phone use while driving has primarily focused on phone calls and texting. Drivers are now increasingly using their phones for other activities during driving, in particular social media, which have different cognitive demands. The present study compared the effects of four different smart phone tasks on car-following performance in a driving simulator. Phone tasks were chosen that vary across two factors, interaction medium (text vs. image) and task pacing (self-paced vs. experimenter-paced), and were as follows: text messaging with the experimenter (text/experimenter-paced), reading Facebook posts (text/self-paced), exchanging photos with the experimenter via Snapchat (image/experimenter-paced), and viewing updates on Instagram (image/self-paced). Drivers also performed a driving-only baseline. Brake reaction times (BRTs) were significantly greater in the text-based conditions (Mean = 1.16 s) as compared to both the image-based conditions (Mean = 0.92 s) and the baseline (0.88 s). There was no significant difference between BRTs in the image-based and baseline conditions, and there was no significant effect of task pacing. Similar results were obtained for Time Headway variability. These results are consistent with the picture superiority effect found in memory research and suggest that image-based interfaces could provide safer ways to "stay connected" while driving than text-based interfaces.

  19. Integrated editing system for Japanese text and image information "Linernote"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuto

    The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system was developed on the concept of electronic publishing and is composed of an NEC PC-9801 VX personal computer and other peripherals. Sentence, drawing, and image data are input and edited under the system's integrated operating environment, and the final text is printed out on a laser printer. The handling efficiency of time-consuming work such as pattern input or page make-up has been improved by a draft-image indication method on the CRT. It is an up-to-date DTP system equipped with three major functions, namely, typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.

  20. The impact of presentation style on the retention of online health information: a randomized-controlled experiment.

    PubMed

    Frisch, Anne-Linda; Camerini, Luca; Schulz, Peter J

    2013-01-01

    The Internet plays an increasingly important role in health education, providing laypeople with information about health-related topics that range from disease-specific contexts to general health promotion. Compared to traditional health education, the Internet allows the use of multimedia applications that offer promise to enhance individuals' health knowledge and literacy. This study aims at testing the effect of multimedia presentation of health information on learning. Relying on an experimental design, it investigates how retention of information differs for text-only presentation, image-only presentation, and multimedia (text and image) presentation of online health information. Two hundred and forty students were randomly assigned to four groups, each exposed to a different website version. Three groups were exposed to the same information using text-only, image-only, or text-and-image presentation. A fourth group received unrelated information (control group). Retention was assessed by means of a recognition test. To examine a possible interaction between website version and recognition test, half of the students received a recognition test in text form and half of them received a recognition test in imagery form. In line with assumptions from Dual Coding Theory, students exposed to the multimedia (text and image) presentation recognized significantly more information than students exposed to the text-only presentation. This did not hold for students exposed to the image-only presentation. The impact of presentation style on retention scores was moderated by the way retention was assessed for image-only presentation, but not for text-only or multimedia presentation. Possible explanations and implications for the design of online health education interventions are discussed.

  1. Computer-assisted liver graft steatosis assessment via learning-based texture analysis.

    PubMed

    Moccia, Sara; Mattos, Leonardo S; Patrini, Ilaria; Ruperti, Michela; Poté, Nicolas; Dondero, Federica; Cauchy, François; Sepulveda, Ailton; Soubrane, Olivier; De Momi, Elena; Diaspro, Alberto; Cesaretti, Manuela

    2018-05-23

    Fast and accurate graft hepatic steatosis (HS) assessment is of primary importance for lowering liver dysfunction risks after transplantation. Histopathological analysis of biopsied liver is the gold standard for assessing HS, despite being invasive and time-consuming. Due to the short time available between liver procurement and transplantation, surgeons perform HS assessment through clinical evaluation (medical history, blood tests) and visual analysis of liver texture. Despite visual analysis being recognized as challenging in the clinical literature, few efforts have been invested to develop computer-assisted solutions for HS assessment. The objective of this paper is to investigate the automatic analysis of liver texture with machine learning algorithms to automate the HS assessment process and offer support for the surgeon's decision process. Forty RGB images of forty different donors were analyzed. The images were captured with an RGB smartphone camera in the operating room (OR). Twenty images refer to livers that were accepted and 20 to discarded livers. Fifteen randomly selected liver patches were extracted from each image. Patch size was [Formula: see text]. This way, a balanced dataset of 600 patches was obtained. Intensity-based features (INT), histogram of local binary pattern ([Formula: see text]), and gray-level co-occurrence matrix ([Formula: see text]) were investigated. Blood-sample features (Blo) were included in the analysis, too. Supervised and semisupervised learning approaches were investigated for feature classification. Leave-one-patient-out cross-validation was performed to estimate the classification performance. With the best-performing feature set ([Formula: see text]) and semisupervised learning, the achieved classification sensitivity, specificity, and accuracy were 95, 81, and 88%, respectively. This research represents the first attempt to use machine learning and automatic texture analysis of RGB images from ubiquitous smartphone cameras for the task of graft HS assessment. The results suggest that this is a promising strategy for developing a fully automatic solution to assist surgeons in HS assessment inside the OR.
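
    The texture descriptors named in this abstract (histograms of local binary patterns and gray-level co-occurrence statistics) can be computed with scikit-image (0.19+ spelling of the GLCM functions) as sketched below. The parameter choices, 8 neighbours at radius 1, a single 1-pixel co-occurrence offset, and the particular GLCM properties, are illustrative assumptions rather than the settings used in the study, and the patch is assumed to be an 8-bit grayscale array.

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

        def texture_features(patch):
            """LBP histogram plus GLCM statistics for one 8-bit grayscale liver patch."""
            # Rotation-invariant uniform local binary patterns, summarised as a histogram.
            lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

            # Gray-level co-occurrence matrix statistics at a 1-pixel offset.
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            glcm_stats = [graycoprops(glcm, prop).mean()
                          for prop in ("contrast", "homogeneity", "energy", "correlation")]

            return np.concatenate([lbp_hist, glcm_stats])

    Such per-patch vectors, optionally concatenated with intensity and blood-sample features, would then be fed to the chosen classifier.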

  2. X-Ray Fluorescence Imaging of Ancient Artifacts

    NASA Astrophysics Data System (ADS)

    Thorne, Robert; Geil, Ethan; Hudson, Kathryn; Crowther, Charles

    2011-03-01

    Many archaeological artifacts feature inscribed and/or painted text or figures which, through erosion and aging, have become difficult or impossible to read with conventional methods. Often, however, the pigments in paints contain metallic elements, and traces may remain even after visible markings are gone. A promising non-destructive technique for revealing these remnants is X-ray fluorescence (XRF) imaging, in which a tightly focused beam of monochromatic synchrotron radiation is raster scanned across a sample. At each pixel, an energy-dispersive detector records a fluorescence spectrum, which is then analyzed to determine element concentrations. In this way, a map of various elements is made across a region of interest. We have successfully applied XRF imaging to ancient Greek, Roman, and Mayan artifacts, and in many cases the element maps have revealed significant new information, including previously invisible painted lines and traces of iron from tools used to carve stone tablets. X-ray imaging can be used to determine an object's provenance, including the region where it was produced and whether it is authentic or a copy.

  3. Digital Data from the Great Sand Dunes and Poncha Springs Aeromagnetic Surveys, South-Central Colorado

    USGS Publications Warehouse

    Drenth, B.J.; Grauch, V.J.S.; Bankey, Viki; New Sense Geophysics, Ltd.

    2009-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for two high-resolution aeromagnetic surveys in south-central Colorado: one in the eastern San Luis Valley, Alamosa and Saguache Counties, and the other in the southern Upper Arkansas Valley, Chaffee County. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve and extends south along the mountain front to the foot of Mount Blanca. In the Upper Arkansas Valley, the Poncha Springs survey covers the town of Poncha Springs and vicinity. The digital files include grids, images, and flight-line data. Several derivative products from these data are also presented as grids and images, including two grids of reduced-to-pole aeromagnetic data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  4. A database for semantic, grammatical, and frequency properties of the first words acquired by Italian children.

    PubMed

    Rinaldi, Pasquale; Barca, Laura; Burani, Cristina

    2004-08-01

    The CFVlexvar.xls database includes imageability, frequency, and grammatical properties of the first words acquired by Italian children. For each of 519 words that are known by children 18-30 months of age (taken from Caselli & Casadio's, 1995, Italian version of the MacArthur Communicative Development Inventory), new values of imageability are provided and values for age of acquisition, child written frequency, and adult written and spoken frequency are included. In this article, correlations among the variables are discussed and the words are grouped into grammatical categories. The results show that words acquired early have imageable referents, are frequently used in the texts read and written by elementary school children, and are frequent in adult written and spoken language. Nouns are acquired earlier and are more imageable than both verbs and adjectives. The composition in grammatical categories of the child's first vocabulary reflects the composition of adult vocabulary. The full set of these norms can be downloaded from www.psychonomic.org/archive/.

  5. An integrated content and metadata based retrieval system for art.

    PubMed

    Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James

    2004-03-01

    A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.

  6. Spread spectrum image steganography.

    PubMed

    Marvel, L M; Boncelet, C R; Retter, C T

    1999-01-01

    In this paper, we present a new method of digital steganography, entitled spread spectrum image steganography (SSIS). Steganography, which means "covered writing" in Greek, is the science of communicating in a hidden manner. Following a discussion of steganographic communication theory and review of existing techniques, the new method, SSIS, is introduced. This system hides and recovers a message of substantial length within digital imagery while maintaining the original image size and dynamic range. The hidden message can be recovered using appropriate keys without any knowledge of the original image. Image restoration, error-control coding, and techniques similar to spread spectrum are described, and the performance of the system is illustrated. A message embedded by this method can be in the form of text, imagery, or any other digital signal. Applications for such a data-hiding scheme include in-band captioning, covert communication, image tamperproofing, authentication, embedded control, and revision tracking.
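
    A drastically simplified sketch of the spread-spectrum idea, one message bit per image block modulated by a keyed pseudo-noise pattern, with blind extraction from a filtered residual, is given below in Python. This is not the SSIS system itself: the error-control coding and image restoration stages described above are reduced to a plain median-filter estimate of the cover, the block size and embedding strength are arbitrary, and all names are invented for illustration.

        import numpy as np
        from scipy.ndimage import median_filter

        def embed(cover, bits, key=0, strength=3.0, block=16):
            """Hide one bit per block by adding a keyed +/-1 pseudo-noise pattern."""
            rng = np.random.default_rng(key)
            stego = cover.astype(float).copy()
            h, w = cover.shape
            k = 0
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    if k >= len(bits):
                        return np.clip(stego, 0, 255)
                    pn = rng.choice([-1.0, 1.0], size=(block, block))
                    sign = 1.0 if bits[k] else -1.0
                    stego[r:r + block, c:c + block] += strength * sign * pn
                    k += 1
            return np.clip(stego, 0, 255)

        def extract(stego, n_bits, key=0, block=16):
            """Estimate the cover with a median filter, then correlate the residual with the keyed patterns."""
            rng = np.random.default_rng(key)
            residual = stego.astype(float) - median_filter(stego.astype(float), size=3)
            h, w = stego.shape
            bits, k = [], 0
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    if k >= n_bits:
                        return bits
                    pn = rng.choice([-1.0, 1.0], size=(block, block))
                    bits.append(int(np.sum(residual[r:r + block, c:c + block] * pn) > 0))
                    k += 1
            return bits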

  7. Single exposure to disclaimers on airbrushed thin ideal images increases negative thought accessibility.

    PubMed

    Selimbegović, Leila; Chatard, Armand

    2015-01-01

    Disclaimers on airbrushed thin ideal images can attract attention to the thin ideal standard promoted by the advertisements, which can be damaging rather than helpful. In this study, 48 female college students were exposed to a thin ideal image including a disclaimer, a neutral sentence, or nothing. Two weeks and two months after this, they were again exposed to the same image but with no accompanying text in any of the conditions. Negative thought accessibility was assessed three times, after each exposure to the thin-ideal image, using reaction time measures. Participants randomly assigned to the disclaimer condition systematically showed greater accessibility of negative thoughts than those in the other two conditions, irrespective of the time of measurement. These results suggest that disclaimers on airbrushed images may have some counter-productive effects by accentuating the problems that they precisely aim to address. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A novel attack method about double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    Using optical image processing techniques, this paper proposes a text encryption and hiding method based on the double-random phase-encoding technique. In the first step, the secret message is transformed into a two-dimensional array: the higher bits of the array elements carry the bit stream of the secret text, while the lower bits store specific values. The transformed array is then encoded with the double random phase encoding technique. Finally, the encoded array is embedded in a public host image to obtain an image carrying the hidden text. The performance of the proposed technique is tested via analytical modeling and a test data stream. Experimental results show that, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient, the secret text can be recovered accurately or almost accurately while maintaining the quality of the host image.
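
    The first two stages, packing the secret bits into the high bits of a 2-D array and then applying double random phase encoding, can be sketched in a few lines of Python/NumPy. This is a simplified illustration under assumptions made here (one secret bit per element in the most significant bit, a fixed value in the remaining bits, both phase masks derived from one seed); it is not the authors' implementation, and the embedding into the host image and the recovery path are omitted.

        import numpy as np

        def text_to_carrier(secret, side=32, fill=64):
            """Pack the bits of a secret string into the most significant bits of a 2-D uint8 array."""
            bits = np.unpackbits(np.frombuffer(secret.encode("utf-8"), dtype=np.uint8))
            carrier = np.full(side * side, fill, dtype=np.uint8)   # lower bits hold a fixed value
            carrier[:bits.size] = (bits << 7) | fill               # assumes the message fits in the array
            return carrier.reshape(side, side)

        def drpe_encode(arr, key=0):
            """Double random phase encoding: one random phase mask in the input plane, one in the Fourier plane."""
            rng = np.random.default_rng(key)
            p1 = np.exp(2j * np.pi * rng.random(arr.shape))
            p2 = np.exp(2j * np.pi * rng.random(arr.shape))
            return np.fft.ifft2(np.fft.fft2(arr * p1) * p2)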

  9. Dealing with extreme data diversity: extraction and fusion from the growing types of document formats

    NASA Astrophysics Data System (ADS)

    David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro

    2015-05-01

    The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in terms of file formats, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items that are typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery using a feature set based on the geometry and appearance of images of documents achieves a 60% greater F1-score than a baseline random classifier.

  10. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text.

    PubMed

    Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco

    2015-10-15

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. A powerful graphical pulse sequence programming tool for magnetic resonance imaging.

    PubMed

    Jie, Shen; Ying, Liu; Jianqi, Li; Gengying, Li

    2005-12-01

    A powerful graphical pulse sequence programming tool has been designed for creating magnetic resonance imaging (MRI) applications. It allows rapid development of pulse sequences in graphical mode (allowing for the visualization of sequences) and consists of three modules: a graphical sequence editor, a parameter management module, and a sequence compiler. Its key features are ease of use, flexibility, and hardware independence. When graphic elements are combined with certain text expressions, graphical pulse sequence programming is as flexible as a text-based programming tool. In addition, a hardware-independent design is implemented using a strategy of two-step compilation. To demonstrate the flexibility and capability of this graphical sequence programming tool, a multi-slice fast spin echo experiment is performed on our home-made 0.3 T permanent magnet MRI system.

  12. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control.

    PubMed

    Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias

    2017-10-01

    Robotic assistance in the operating room promises higher accuracy and hence makes demanding surgical interventions, such as direct cochlear access, realisable. Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts, but with regard to accuracy they lower the structural stiffness and thus introduce an additional error source. The aim of this contribution is to examine whether the accuracy needed for demanding interventions can be achieved by such a system. The achievable accuracy of the robot-assisted process depends on each workflow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed, which allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated in drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at entrance and [Formula: see text] at target point, excluding imaging, was achieved. With admittance feed control, an accuracy of [Formula: see text] at target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms, including imaging. In this set-up, an error of [Formula: see text] and [Formula: see text] was achieved. The results of the conducted experiments show that accuracy requirements for demanding procedures such as direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.

  13. Image query and indexing for digital x rays

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1998-12-01

    The web-based medical information retrieval system (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from the National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL queries of the text and viewing of the returned text records and images in a standard browser. We are now working (1) to determine the utility of data derived directly from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database of image-derived data, we are manually segmenting a subset of the vertebrae using techniques from vertebral morphometry. From these segmentations, we will derive vertebral features and add them to the database. This image-derived data will enhance the user's data access capability by enabling combined SQL/image-content queries.

  14. Systematic review, critical appraisal, and analysis of the quality of economic evaluations in stroke imaging.

    PubMed

    Burton, Kirsteen R; Perlis, Nathan; Aviv, Richard I; Moody, Alan R; Kapral, Moira K; Krahn, Murray D; Laupacis, Andreas

    2014-03-01

    This study reviews the quality of economic evaluations of imaging after acute stroke and identifies areas for improvement. We performed full-text searches of electronic databases that included Medline, Econlit, the National Health Service Economic Evaluation Database, and the Tufts Cost Effectiveness Analysis Registry through July 2012. Search strategy terms included the following: stroke*; cost*; or cost-benefit analysis*; and imag*. Inclusion criteria were empirical studies published in any language that reported the results of economic evaluations of imaging interventions for patients with stroke symptoms. Study quality was assessed by a commonly used checklist (with a score range of 0% to 100%). Of 568 unique potential articles identified, 5 were included in the review. Four of 5 articles were explicit in their analysis perspectives, which included healthcare system payers, hospitals, and stroke services. Two studies reported results during a 5-year time horizon, and 3 studies reported lifetime results. All included the modified Rankin Scale score as an outcome measure. The median quality score was 84.4% (range=71.9%-93.5%). Most studies did not consider the possibility that patients could not tolerate contrast media or could incur contrast-induced nephropathy. Three studies compared perfusion computed tomography with unenhanced computed tomography but assumed that outcomes guided by the results of perfusion computed tomography were equivalent to outcomes guided by the results of magnetic resonance imaging or noncontrast computed tomography. Economic evaluations of imaging modalities after acute ischemic stroke were generally of high methodological quality. However, important radiology-specific clinical components were missing from all of these analyses.

  15. Cardiovascular magnetic resonance physics for clinicians: part I.

    PubMed

    Ridgway, John P

    2010-11-30

    There are many excellent specialised texts and articles that describe the physical principles of cardiovascular magnetic resonance (CMR) techniques. There are also many texts written with the clinician in mind that provide an understandable, more general introduction to the basic physical principles of magnetic resonance (MR) techniques and applications. There are, however, very few texts or articles that attempt to provide a basic MR physics introduction that is tailored for clinicians using CMR in their daily practice. This is the first of two reviews that are intended to cover the essential aspects of CMR physics in a way that is understandable and relevant to this group. It begins by explaining the basic physical principles of MR, including a description of the main components of an MR imaging system and the three types of magnetic field that they generate. The origin and method of production of the MR signal in biological systems are explained, focusing in particular on the two tissue magnetisation relaxation properties (T1 and T2) that give rise to signal differences from tissues, showing how they can be exploited to generate image contrast for tissue characterisation. The method most commonly used to localise and encode MR signal echoes to form a cross sectional image is described, introducing the concept of k-space and showing how the MR signal data stored within it relates to properties within the reconstructed image. Before describing the CMR acquisition methods in detail, the basic spin echo and gradient echo pulse sequences are introduced, identifying the key parameters that influence image contrast, including appearances in the presence of flowing blood, resolution and image acquisition time. The main derivatives of these two pulse sequences used for cardiac imaging are then described in more detail. Two of the key requirements for CMR are the need for data acquisition first to be synchronised with the subject's ECG and to be fast enough for the subject to be able to hold their breath. Methods of ECG synchronisation using both triggering and retrospective gating approaches, and accelerated data acquisition using turbo or fast spin echo and gradient echo pulse sequences, are therefore outlined in some detail. It is shown how double inversion black blood preparation combined with turbo or fast spin echo pulse sequence acquisition is used to achieve high quality anatomical imaging. For functional cardiac imaging using cine gradient echo pulse sequences, two derivatives of the gradient echo pulse sequence, spoiled gradient echo and balanced steady state free precession (bSSFP), are compared. In each case key relevant imaging parameters and vendor-specific terms are defined and explained.
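
    For readers who want the relaxation-weighted contrast described above in symbols, the standard signal models for the two pulse-sequence families mentioned (spin echo and spoiled gradient echo) are, in LaTeX notation and under the usual simplifying assumptions (complete spoiling, TE much shorter than TR for the recovery term),

        S_{\mathrm{SE}} \;\propto\; \rho \left(1 - e^{-TR/T_1}\right) e^{-TE/T_2},
        \qquad
        S_{\mathrm{SPGR}} \;\propto\; \rho \,\sin\alpha \,\frac{1 - e^{-TR/T_1}}{1 - \cos\alpha \, e^{-TR/T_1}}\, e^{-TE/T_2^{*}},

    where \rho is the proton density, TR and TE are the repetition and echo times, and \alpha is the flip angle. These are textbook expressions rather than formulas quoted from the article; they simply make explicit how the T1 and T2 (or T2*) relaxation properties discussed in the review translate into image contrast through the operator-chosen TR, TE, and flip angle.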

  16. Cardiovascular magnetic resonance physics for clinicians: part I

    PubMed Central

    2010-01-01

    There are many excellent specialised texts and articles that describe the physical principles of cardiovascular magnetic resonance (CMR) techniques. There are also many texts written with the clinician in mind that provide an understandable, more general introduction to the basic physical principles of magnetic resonance (MR) techniques and applications. There are, however, very few texts or articles that attempt to provide a basic MR physics introduction that is tailored for clinicians using CMR in their daily practice. This is the first of two reviews that are intended to cover the essential aspects of CMR physics in a way that is understandable and relevant to this group. It begins by explaining the basic physical principles of MR, including a description of the main components of an MR imaging system and the three types of magnetic field that they generate. The origin and method of production of the MR signal in biological systems are explained, focusing in particular on the two tissue magnetisation relaxation properties (T1 and T2) that give rise to signal differences from tissues, showing how they can be exploited to generate image contrast for tissue characterisation. The method most commonly used to localise and encode MR signal echoes to form a cross sectional image is described, introducing the concept of k-space and showing how the MR signal data stored within it relates to properties within the reconstructed image. Before describing the CMR acquisition methods in detail, the basic spin echo and gradient echo pulse sequences are introduced, identifying the key parameters that influence image contrast, including appearances in the presence of flowing blood, resolution and image acquisition time. The main derivatives of these two pulse sequences used for cardiac imaging are then described in more detail. Two of the key requirements for CMR are the need for data acquisition first to be synchronised with the subject's ECG and to be fast enough for the subject to be able to hold their breath. Methods of ECG synchronisation using both triggering and retrospective gating approaches, and accelerated data acquisition using turbo or fast spin echo and gradient echo pulse sequences, are therefore outlined in some detail. It is shown how double inversion black blood preparation combined with turbo or fast spin echo pulse sequence acquisition is used to achieve high quality anatomical imaging. For functional cardiac imaging using cine gradient echo pulse sequences, two derivatives of the gradient echo pulse sequence, spoiled gradient echo and balanced steady state free precession (bSSFP), are compared. In each case key relevant imaging parameters and vendor-specific terms are defined and explained. PMID:21118531

  17. Visualization and recommendation of large image collections toward effective sensemaking

    NASA Astrophysics Data System (ADS)

    Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis

    2016-03-01

    In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.

  18. Southeast Asian palm leaf manuscript images: a review of handwritten text line segmentation methods and new challenges

    NASA Astrophysics Data System (ADS)

    Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc

    2017-01-01

    Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies for the collection of palm leaf manuscript images. The image corpus used in this study comes from the sample images of palm leaf manuscripts of three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images are tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for shredding method. Two other methods that can be directly applied on grayscale images are also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by ICDAR2013 Handwriting Segmentation Contest were used in this experiment.

  19. What Images Reveal: a Comparative Study of Science Images between Australian and Taiwanese Junior High School Textbooks

    NASA Astrophysics Data System (ADS)

    Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua; Chang, Huey-Por

    2017-07-01

    From a social semiotic perspective, image designs in science textbooks are inevitably influenced by the sociocultural context in which the books are produced. The learning environments of Australia and Taiwan vary greatly. Drawing on social semiotics and cognitive science, this study compares classificational images in Australian and Taiwanese junior high school science textbooks. Classificational images are important kinds of images, which can represent taxonomic relations among objects as reported by Kress and van Leeuwen (Reading images: the grammar of visual design, 2006). An analysis of the images from sample chapters in Australian and Taiwanese high school science textbooks showed that the majority of the Taiwanese images are covert taxonomies, which represent hierarchical relations implicitly. In contrast, Australian classificational images included diversified designs, but particularly types with a tree structure which depicted overt taxonomies, explicitly representing hierarchical super-ordinate and subordinate relations. Many of the Taiwanese images are reminiscent of the specimen images in eighteenth century science texts representing "what truly is", while more Australian images emphasize structural objectivity. Moreover, Australian images support cognitive functions which facilitate reading comprehension. The relationships between image designs and learning environments are discussed and implications for textbook research and design are addressed.

  20. BOOK REVIEW: Image-Guided IMRT

    NASA Astrophysics Data System (ADS)

    Mayles, P.

    2006-12-01

    This book provides comprehensive coverage of the subject of intensity modulated radiotherapy and the associated imaging. Most of the names associated with advanced radiotherapy can be found among the 80 authors and the book is therefore an authoritative reference text. The early chapters deal with the basic principles and include an interesting comparison between views of quality assurance for IMRT from Europe and North America. It is refreshing to see that the advice given has moved on from the concept of individual patient based quality control to more generic testing of the delivery system. However, the point is made that the whole process including the data transfer needs to be quality assured and the need for thorough commissioning of the process is emphasised. The `tricks' needed to achieve a dose based IMRT plan are well covered by the group at Ghent and there is an interesting summary of biological aspects of treatment planning for IMRT by Andrzej Niemierko. The middle section of the book deals with advanced imaging aspects of both treatment planning and delivery. The contributions of PET and MR imaging are well covered and there is a rather rambling section on molecular imaging. Image guidance in radiotherapy treatment is addressed including the concept of adaptive radiotherapy. The treatment aspects could perhaps have merited some more coverage, but there is a very thorough discussion of 4D techniques. The final section of the book considers each site of the body in turn. This will be found useful by those wishing to embark on IMRT in a new area, although some of the sections are more comprehensive than others. The book contains a wealth of interesting and thought provoking articles giving details as well as broad principles, and would be a useful addition to every departmental library. The editors have done a good job of ensuring that the different chapters are complementary, and of encouraging a systematic approach to the descriptions of IMRT in different anatomical sites, each of which ends with a look ahead to the future. It is something of a challenge to keep a book devoted to a rapidly developing technique up to date. Inspection of the references suggests that most of the text was completed in 2004, but the choice of world renowned authors means that the text very much represents the state of the art. The book is well presented with many colour images and justifies its £110 price tag. However, there are some signs of it having been produced within a short time scale, such as an inadequate index which cannot be relied on to lead the reader to all, or even the most relevant, discussion on a particular topic. This book should make a significant contribution to widening the use of this important advance in radiation therapy techniques.

  1. Technological Convergence: A Brief Review of Some of the Developments in the Integrated Storage and Retrieval of Text, Data, Sound and Image.

    ERIC Educational Resources Information Center

    Forrest, Charles

    1988-01-01

    Reviews technological developments centered around microcomputers that have led to the design of integrated workstations. Topics discussed include methods of information storage, information retrieval, telecommunications networks, word processing, data management, graphics, interactive video, sound, interfaces, artificial intelligence, hypermedia,…

  2. 76 FR 5216 - Notice of Availability of Final Supplemental Environmental Impact Statement for the Nichols Ranch...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-28

    ... Uranium Recovery Project, located in the Pumpkin Buttes Uranium Mining District within the Powder River.... Alternatives that were considered, but were eliminated from detailed analysis, include conventional mining and... an Agencywide Documents and Management System (ADAMS), which provides text and image files of the NRC...

  3. 76 FR 53500 - Notice of the Nuclear Regulatory Commission Issuance of Materials License SUA-1598 and Record of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... (ADAMS), which provides text and image files of the NRC's public documents in the NRC Library at http... considered, but eliminated from detailed analysis, include conventional uranium mining and milling, conventional mining and heap leach processing, alternate lixiviants, and alternative wastewater disposal...

  4. Creating a New Definition of Library Cooperation: Past, Present, and Future Models.

    ERIC Educational Resources Information Center

    Lenzini, Rebecca T.; Shaw, Ward

    1991-01-01

    Describes the creation and purpose of the Colorado Alliance of Research Libraries (CARL), the subsequent development of CARL Systems, and its current research projects. Topics discussed include online catalogs; UnCover, a journal article database; full text data; document delivery; visual images in computer systems; networks; and implications for…

  5. A Pointing Out and Naming Paradigm to Support Radiological Teaching and Case-Oriented Learning.

    ERIC Educational Resources Information Center

    Van Cleynenbreugel, J.; And Others

    1994-01-01

    The use of computer programs for authoring and presenting case materials in professional instruction in radiology is discussed. A workstation-based multimedia program for presenting and annotating images accompanied by both voice and text is described. Comments are also included on validity results and student response. (MSE)

  6. An Optical Disk-Based Information Retrieval System.

    ERIC Educational Resources Information Center

    Bender, Avi

    1988-01-01

    Discusses a pilot project by the Nuclear Regulatory Commission to apply optical disk technology to the storage and retrieval of documents related to its high level waste management program. Components and features of the microcomputer-based system which provides full-text and image access to documents are described. A sample search is included.…

  7. Communication and Reception in Teaching: The Age of Image "versus" the "Weight" of Words

    ERIC Educational Resources Information Center

    Bradea, Adela

    2015-01-01

    Contemporary culture is mainly a culture of image. We get our information by seeing. Examination of images is free, while reading is impelled by the necessity of browsing the whole text. The image seems more appropriate than the text when trying to communicate easily and quickly. Speech calls for articulated language, expressed through a symbolic…

  8. 2D bifurcations and Newtonian properties of memristive Chua's circuits

    NASA Astrophysics Data System (ADS)

    Marszalek, W.; Podhaisky, H.

    2016-01-01

    Two interesting properties of Chua's circuits are presented. First, two-parameter bifurcation diagrams of Chua's oscillatory circuits with memristors are presented. Obtaining the various 2D bifurcation images requires a substantial numerical effort, possibly with parallel computations. The numerical algorithm is described first, and its numerical code for 2D bifurcation image creation is available for free download. Several color 2D images and the corresponding 1D greyscale bifurcation diagrams are included. Second, Chua's circuits are linked to Newton's law φ'' = F(t, φ, φ')/m, where φ denotes the flux, m > 0 is constant, and the force term F(t, φ, φ') contains memory terms. Finally, the jounce scalar equations for Chua's circuits are also discussed.
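
    As a rough illustration of how such two-parameter bifurcation images can be produced, the sketch below sweeps two parameters of the classical (non-memristive) Chua oscillator and colors each pixel by the number of distinct peaks in the steady-state waveform. The equations, parameter ranges, and peak-counting rule are illustrative assumptions, standing in for the memristive models and the authors' downloadable code.

        # Sketch: greyscale 2D bifurcation image for a Chua-type oscillator.
        # Classical Chua equations and illustrative parameter ranges (assumptions).
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.signal import find_peaks

        def chua(t, s, alpha, beta, m0=-8/7, m1=-5/7):
            x, y, z = s
            fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
            return [alpha * (y - x - fx), x - y + z, -beta * y]

        def peak_count(alpha, beta):
            # Integrate past the transient, then count distinct maxima of x(t).
            sol = solve_ivp(chua, (0, 300), [0.1, 0.0, 0.0], args=(alpha, beta),
                            t_eval=np.linspace(200, 300, 4000), rtol=1e-8)
            peaks, _ = find_peaks(sol.y[0])
            return len(np.unique(np.round(sol.y[0][peaks], 2)))

        alphas = np.linspace(8.0, 11.0, 60)   # parameter sweep (illustrative)
        betas = np.linspace(12.0, 18.0, 60)
        image = np.array([[peak_count(a, b) for a in alphas] for b in betas])
        # 'image' can now be saved or displayed as a greyscale bifurcation diagram.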

  9. SWT voting-based color reduction for text detection in natural scene images

    NASA Astrophysics Data System (ADS)

    Ikica, Andrej; Peer, Peter

    2013-12-01

    In this article, we propose a novel stroke width transform (SWT) voting-based color reduction method for detecting text in natural scene images. Unlike other text detection approaches that mostly rely on either text structure or color, the proposed method combines both by supervising the text-oriented color reduction process with additional SWT information. SWT pixels mapped to color space vote in favor of the color they correspond to. Colors receiving a high SWT vote most likely belong to text areas and are blocked from being mean-shifted away. The literature does not explicitly address the SWT search direction issue; thus, we propose an adaptive sub-block method for determining the correct SWT direction. Both the SWT voting-based color reduction and the SWT direction determination methods are evaluated on binary (text/non-text) images obtained from the challenging Computer Vision Lab optical character recognition database. The SWT voting-based color reduction method outperforms the state-of-the-art text-oriented color reduction approach.
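
    A minimal sketch of the voting idea described above, assuming the SWT map has already been computed by some other routine (hypothetical input) and using a simple uniform color quantization; the bin count and the fraction of protected colors are illustrative choices, not the authors' settings.

        # Voting step only: colors collecting many stroke-width votes are treated
        # as likely text colors and protected from being merged away.
        # The SWT map is a hypothetical precomputed input (0 where no stroke found).
        import numpy as np

        def swt_color_votes(image_rgb, swt_map, bins=8):
            """image_rgb: HxWx3 uint8; swt_map: HxW stroke-width map."""
            q = (image_rgb // (256 // bins)).astype(np.int64)
            color_ids = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
            votes = np.zeros(bins ** 3)
            np.add.at(votes, color_ids[swt_map > 0], 1)   # one vote per SWT pixel
            return color_ids, votes

        def protected_color_ids(votes, keep_fraction=0.05):
            # Top-voted quantized colors are blocked from being mean-shifted away.
            threshold = np.quantile(votes[votes > 0], 1.0 - keep_fraction)
            return set(np.flatnonzero(votes >= threshold))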

  10. Digital atlas of Oklahoma

    USGS Publications Warehouse

    Rea, A.H.; Becker, C.J.

    1997-01-01

    This compact disc contains 25 digital map data sets covering the State of Oklahoma that may be of interest to the general public, private industry, schools, and government agencies. Fourteen data sets are statewide. These data sets include: administrative boundaries; 104th U.S. Congressional district boundaries; county boundaries; latitudinal lines; longitudinal lines; geographic names; indexes of U.S. Geological Survey 1:100,000- and 1:250,000-scale topographic quadrangles; a shaded-relief image; Oklahoma State House of Representatives district boundaries; Oklahoma State Senate district boundaries; locations of U.S. Geological Survey stream gages; watershed boundaries and hydrologic cataloging unit numbers; and locations of weather stations. Eleven data sets are divided by county and are located in 77 county subdirectories. These data sets include: census block group boundaries with selected demographic data; city and major highways text; geographic names; land surface elevation contours; elevation points; an index of U.S. Geological Survey 1:24,000-scale topographic quadrangles; roads, streets and address ranges; highway text; school district boundaries; streams, rivers and lakes; and the public land survey system. All data sets are provided in a readily accessible format. Most data sets are provided in Digital Line Graph (DLG) format. The attributes for many of the DLG files are stored in related dBASE(R)-format files and may be joined to the data set polygon attribute or arc attribute tables using dBASE(R)-compatible software. (Any use of trade names in this publication is for descriptive purposes only and does not imply endorsement by the U.S. Government.) Point attribute tables are provided in dBASE(R) format only, and include the X and Y map coordinates of each point. Annotation (text plotted in map coordinates) is provided in AutoCAD Drawing Exchange format (DXF) files. The shaded-relief image is provided in TIFF format. All data sets except the shaded-relief image also are provided in ARC/INFO export-file format.

  11. Content-based image retrieval with ontological ranking

    NASA Astrophysics Data System (ADS)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." This is because, compared with text consisting of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less limited structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when a limited number of learning examples and background knowledge are given. The advance of internet and web technologies in the past decade has changed the way humans gain knowledge. People can hence exchange knowledge with others by discussing and contributing information on the web. As a result, web pages on the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from the web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility on image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting general human knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts for machines to understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is thus content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved not only on the basis of text cues but on their actual contents as well; second, the grouping is different from pure visual similarity clustering. More specifically, the inferred concepts of each image in the group are examined in the context of a huge concept ontology to determine their true relations with what people have in mind when doing image search.

  12. Effects of image-based and text-based active learning exercises on student examination performance in a musculoskeletal anatomy course.

    PubMed

    Gross, M Melissa; Wright, Mary C; Anderson, Olivia S

    2017-09-01

    Research on the benefits of visual learning has relied primarily on lecture-based pedagogy, but the potential benefits of combining active learning strategies with visual and verbal materials on learning anatomy has not yet been explored. In this study, the differential effects of text-based and image-based active learning exercises on examination performance were investigated in a functional anatomy course. Each class session was punctuated with an average of 12 text-based and image-based active learning exercises. Participation data from 231 students were compared with their examination performance on 262 questions associated with the in-class exercises. Students also rated the helpfulness and difficulty of the in-class exercises on a survey. Participation in the active learning exercises was positively correlated with examination performance (r = 0.63, P < 0.001). When controlling for other key demographics (gender, underrepresented minority status) and prior grade point average, participation in the image-based exercises was significantly correlated with performance on examination questions associated with image-based exercises (P < 0.001) and text-based exercises (P < 0.01), while participation in text-based exercises was not. Additionally, students reported that the active learning exercises were helpful for seeing images of key ideas (94%) and clarifying key course concepts (80%), and that the image-based exercises were significantly less demanding, less hard and required less effort than text-based exercises (P < 0.05). The findings confirm the positive effect of using images and active learning strategies on student learning, and suggest that integrating them may be especially beneficial for learning anatomy. Anat Sci Educ 10: 444-455. © 2017 American Association of Anatomists.

  13. Imaging of plantar fascia disorders: findings on plain radiography, ultrasound and magnetic resonance imaging.

    PubMed

    Draghi, Ferdinando; Gitto, Salvatore; Bortolotto, Chandra; Draghi, Anna Guja; Ori Belometti, Gioia

    2017-02-01

    Plantar fascia (PF) disorders commonly cause heel pain and disability in the general population. Imaging is often required to confirm diagnosis. This review article aims to provide simple and systematic guidelines for imaging assessment of PF disease, focussing on key findings detectable on plain radiography, ultrasound and magnetic resonance imaging (MRI). Sonographic characteristics of plantar fasciitis include PF thickening, loss of fibrillar structure, perifascial collections, calcifications and hyperaemia on Doppler imaging. Thickening and signal changes in the PF as well as oedema of adjacent soft tissues and bone marrow can be assessed on MRI. Radiographic findings of plantar fasciitis include PF thickening, cortical irregularities and abnormalities in the fat pad located deep below the PF. Plantar fibromatosis appears as well-demarcated, nodular thickenings that are iso-hypoechoic on ultrasound and show low-signal intensity on MRI. PF tears present with partial or complete fibre interruption on both ultrasound and MRI. Imaging description of further PF disorders, including xanthoma, diabetic fascial disease, foreign-body reactions and plantar infections, is detailed in the main text. Ultrasound and MRI should be considered as first- and second-line modalities for assessment of PF disorders, respectively. Indirect findings of PF disease can be ruled out on plain radiography. Teaching Points • PF disorders commonly cause heel pain and disability in the general population.• Imaging is often required to confirm diagnosis or reveal concomitant injuries.• Ultrasound and MRI respectively represent the first- and second-line modalities for diagnosis.• Indirect findings of PF disease can be ruled out on plain radiography.

  14. Graphic imagery is not sufficient for increased attention to cigarette warnings: the role of text captions.

    PubMed

    Brown, Kyle G; Reidy, John G; Weighall, Anna R; Arden, Madelynne A

    2013-04-01

    The present study aims to assess the extent to which attention to UK cigarette warnings is attributable to the graphic nature of the content. A visual dot probe task was utilised, with the warnings serving as critical stimuli that were manipulated for the presence of graphic versus neutral image content, and the accompanying text caption. This mixed design yielded image content (graphic versus neutrally-matched images) and presence (versus absence) of text caption as within subjects variables and smoking status as a between-participants variable. The experiment took place within the laboratories of a UK university. Eighty-six psychology undergraduates (51% smokers, 69% female), predominantly of Caucasian ethnicity took part. Reaction times towards probes replacing graphic images relative to probes replacing neutral images were utilised to create an index of attentional bias. Bias scores (M = 10.20 ± 2.56) highlighted that the graphic image content of the warnings elicited attentional biases (relative to neutral images) for smokers. This only occurred in the presence of an accompanying text caption [t (43) = 3.950, P < 0.001] as opposed to when no caption was present [t (43) = 0.029, P = 0.977]. Non-smokers showed no biases in both instances. Graphic imagery on cigarette packets increases attentional capture, but only when accompanied by a text message about health risks. © 2012 The Authors, Addiction © 2012 Society for the Study of Addiction.

  15. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time-consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer a marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD-ROM depending on the quality of image required for the finding in question. This reduces the weight of a teaching file from that of a baby elephant to that of a single CD-ROM disc. Thus, with this method of teaching file preparation and storage the following advantages are realized: (1) Technically easier and less time-consuming image reproduction. (2) Considerably less unwieldy and substantially more portable teaching files. (3) Novel ability to index files and then retrieve specific cases of choice based on descriptive text.

  16. Cosmology with the Large Synoptic Survey Telescope: an overview.

    PubMed

    Zhan, Hu; Anthony Tyson, J

    2018-06-01

    The Large Synoptic Survey Telescope (LSST) is a high étendue imaging facility that is being constructed atop Cerro Pachón in northern Chile. It is scheduled to begin science operations in 2022. With an [Formula: see text] ([Formula: see text] effective) aperture, a novel three-mirror design achieving a seeing-limited [Formula: see text] field of view, and a 3.2 gigapixel camera, the LSST has the deep-wide-fast imaging capability necessary to carry out an [Formula: see text] survey in six passbands (ugrizy) to a coadded depth of [Formula: see text] over 10 years using [Formula: see text] of its observational time. The remaining [Formula: see text] of the time will be devoted to considerably deeper and faster time-domain observations and smaller surveys. In total, each patch of the sky in the main survey will receive 800 visits allocated across the six passbands with [Formula: see text] exposure visits. The huge volume of high-quality LSST data will provide a wide range of science opportunities and, in particular, open a new era of precision cosmology with unprecedented statistical power and tight control of systematic errors. In this review, we give a brief account of the LSST cosmology program with an emphasis on dark energy investigations. The LSST will address dark energy physics and cosmology in general by exploiting diverse precision probes including large-scale structure, weak lensing, type Ia supernovae, galaxy clusters, and strong lensing. Combined with the cosmic microwave background data, these probes form interlocking tests on the cosmological model and the nature of dark energy in the presence of various systematics. The LSST data products will be made available to the US and Chilean scientific communities and to international partners with no proprietary period. Close collaborations with contemporaneous imaging and spectroscopy surveys observing at a variety of wavelengths, resolutions, depths, and timescales will be a vital part of the LSST science program, which will not only enhance specific studies but, more importantly, also allow a more complete understanding of the Universe through different windows.

  17. Image classification of human carcinoma cells using complex wavelet-based covariance descriptors.

    PubMed

    Keskin, Furkan; Suhre, Alexander; Kose, Kivanc; Ersahin, Tulin; Cetin, A Enis; Cetin-Atalay, Rengul

    2013-01-01

    Cancer cell lines are widely used for research purposes in laboratories all over the world. Computer-assisted classification of cancer cells can alleviate the burden of manual labeling and help cancer research. In this paper, we present a novel computerized method for cancer cell line image classification. The aim is to automatically classify 14 different classes of cell lines including 7 classes of breast and 7 classes of liver cancer cells. Microscopic images containing irregular carcinoma cell patterns are represented by subwindows which correspond to foreground pixels. For each subwindow, a covariance descriptor utilizing the dual-tree complex wavelet transform (DT-CWT) coefficients and several morphological attributes is computed. Directionally selective DT-CWT feature parameters are preferred primarily because of their ability to characterize edges at multiple orientations, which is the characteristic feature of carcinoma cell line images. A Support Vector Machine (SVM) classifier with a radial basis function (RBF) kernel is employed for final classification. Over a dataset of 840 images, we achieve an accuracy above 98%, which outperforms the classical covariance-based methods. The proposed system can be used as a reliable decision maker for laboratory studies. Our tool provides an automated, time- and cost-efficient analysis of cancer cell morphology to classify different cancer cell lines using image-processing techniques, which can be used as an alternative to the costly short tandem repeat (STR) analysis. The data set used in this manuscript is available as supplementary material through http://signal.ee.bilkent.edu.tr/cancerCellLineClassificationSampleImages.html.
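
    A rough sketch of the region-covariance-plus-SVM pipeline, with plain intensity and gradient maps standing in for the DT-CWT coefficients and morphological attributes used in the paper; the subwindow size, pooling step, and the inputs X and y are illustrative assumptions.

        # Region-covariance descriptor per subwindow + RBF-SVM classification.
        # Intensity/gradient maps stand in for DT-CWT coefficients (simplification);
        # X (grayscale images) and y (cell-line labels) are hypothetical inputs.
        import numpy as np
        from sklearn.svm import SVC

        def covariance_descriptor(subwindow):
            g = subwindow.astype(float)
            gy, gx = np.gradient(g)
            feats = np.stack([g, np.abs(gx), np.abs(gy),
                              np.hypot(gx, gy)], axis=-1).reshape(-1, 4)
            cov = np.cov(feats, rowvar=False)
            iu = np.triu_indices(cov.shape[0])
            return cov[iu]                        # fixed-length vector per subwindow

        def image_descriptor(image, win=64):
            descs = [covariance_descriptor(image[r:r + win, c:c + win])
                     for r in range(0, image.shape[0] - win + 1, win)
                     for c in range(0, image.shape[1] - win + 1, win)]
            return np.mean(descs, axis=0)         # simple pooling over subwindows

        # clf = SVC(kernel="rbf", gamma="scale")
        # clf.fit([image_descriptor(im) for im in X], y)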

  18. Hiding Techniques for Dynamic Encryption Text based on Corner Point

    NASA Astrophysics Data System (ADS)

    Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna

    2018-05-01

    A hiding technique for dynamically encrypted text using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the MSB of the cover image points and is used as the first phase of encryption. The Harris corner point algorithm is applied to the cover image to generate the corner points, which are used to generate a dynamic AES key for the second phase of text encryption. The embedding is performed in the LSB of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results have demonstrated that the proposed scheme has good embedding quality, error-free text recovery, and a high PSNR value.
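
    A hedged sketch of the general pipeline described above (not the authors' exact scheme): Harris corners are detected with OpenCV, a 256-bit AES key is derived by hashing the corner coordinates, the text is encrypted with pycryptodome, and the payload bits are embedded in the LSBs of non-corner pixels. The key-derivation rule and bit layout are illustrative choices.

        # Harris corners -> dynamic AES key, then LSB embedding that skips the
        # corner pixels. Illustrative key derivation and bit layout, not the
        # authors' exact scheme. Requires opencv-python and pycryptodome.
        import hashlib
        import cv2
        import numpy as np
        from Crypto.Cipher import AES

        def encrypt_with_corner_key(cover_gray, plaintext: bytes):
            corners = cv2.cornerHarris(np.float32(cover_gray), 2, 3, 0.04)
            corner_mask = corners > 0.01 * corners.max()
            key = hashlib.sha256(np.argwhere(corner_mask).tobytes()).digest()
            cipher = AES.new(key, AES.MODE_EAX)
            ciphertext, tag = cipher.encrypt_and_digest(plaintext)
            return corner_mask, cipher.nonce + tag + ciphertext

        def embed_lsb(cover_gray, payload: bytes, corner_mask):
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            stego = cover_gray.copy()
            flat = stego.reshape(-1)
            usable = np.flatnonzero(~corner_mask.reshape(-1))[:bits.size]
            flat[usable] = (flat[usable] & 0xFE) | bits   # capacity check omitted
            return stego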

  19. The Western Civilization Videodisc (Second Edition), CD-ROM, and Master Guide [Multimedia.

    ERIC Educational Resources Information Center

    1996

    This resource represents a virtual library of still and moving images, documents, maps, sound clips and text which make up the history of Western Civilization from prehistoric times to the early 1990s. The interdisciplinary range of materials included is compatible with standard textbooks in middle and high school social science, social studies,…

  20. Interdisciplinarity and the Two Cultures in [image omitted]--Approaches in a Greek Science Magazine in the 1970s

    ERIC Educational Resources Information Center

    Rentzos, Ioannis

    2005-01-01

    The contents of the Greek magazine "Physicos Cosmos" include science popularization, teaching proposals, and issues of educational concern. The magazine is addressed to teachers of physics and, consequently, to grammar-school pupils/students. Its articles ranged, in general, from short texts taken from physical sciences to more specialized…

  1. I'm Ready for My Close-up Now: Electronic Portfolios and How We Read Identity

    ERIC Educational Resources Information Center

    Williams, Bronwyn T.

    2007-01-01

    Electronic portfolios allow students to include video, images, hyperlinks, and audio, along with written texts, to create varied and comprehensive representations of what they have accomplished. Such collections of information also change the ways that teachers and students choose to present themselves and the ways that we read these presentations…

  2. Planning the National Agricultural Library's Multimedia CD-ROM "Ornamental Horticulture."

    ERIC Educational Resources Information Center

    Mason, Pamela R.

    1991-01-01

    Discussion of issues involved in planning a multimedia CD-ROM product explains the selection of authoring tools, the design of a user interface, expert systems, text conversion and capture (including scanning and optical character recognition), and problems associated with image files. The use of audio is also discussed, and a 14-item glossary is…

  3. Multimedia Environments in Mathematics Teacher Education: Preparing Regular and Special Educators for Inclusive Classrooms

    ERIC Educational Resources Information Center

    De La Paz, Susan; Hernandez-Ramos, Pedro; Barron, Linda

    2004-01-01

    A multimedia CD-ROM program, Mathematics Teaching and Learning in Inclusive Classrooms, was produced to help preservice teachers learn mathematics teaching methods in the context of inclusive classrooms. The contents include text resources, video segments of experts and of classroom lessons, images of student work, an electronic notebook, and a…

  4. Bringing Women in: Gender and American Government and Politics Textbooks

    ERIC Educational Resources Information Center

    Olivo, Christiane

    2012-01-01

    This study of 12 introductory American government and politics textbooks shows that their main narratives still focus largely on men's experiences as political actors and pay little attention to women's experiences. While on average just 9% of pages included in-text references to women, 28% of images and 17% of sidebars, tables, figures, and…

  5. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  6. Mobile medical image retrieval

    NASA Astrophysics Data System (ADS)

    Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning

    2011-03-01

    Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance mainly as a research domain over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable, with functionality equaling that of former desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search and access, and in this context images can play an important role as they often allow complex scenarios to be understood much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes the constraints of an information retrieval system including visual and textual information retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access, with an example on an iPhone in a web-based environment, are presented, as iPhones are frequently used and the operating system is bound to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow for use by other smartphone platforms such as Android as well. The constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid approach had to be taken to allow pictures to be taken with the cell phone camera and uploaded for visual similarity search, as most producers of smartphones block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens. Images can be read on screen much faster, and the relevance of documents can be identified quickly through the use of images contained in the text. Problems with the many, often incompatible mobile platforms were discovered and are listed in the text. Mobile information access is a quickly growing domain, and the constraints of mobile access also need to be taken into account for image retrieval. The demonstrated access to the medical literature is most relevant, as the medical literature and its images are clearly the largest knowledge source in the medical field.

  7. Text, Graphics, and Multimedia Materials Employed in Learning a Computer-Based Procedural Task

    ERIC Educational Resources Information Center

    Coffindaffer, Kari Christine Carlson

    2010-01-01

    The present research study investigated the interaction of graphic design students with different forms of software training materials. Four versions of the procedural task instructions were developed: (A) Traditional Textbook with Still Images, (B) Modified Text with Integrated Still Images, (C) Onscreen Modified Text with Silent Onscreen Video…

  8. Exposure to graphic warning labels on cigarette packages: Effects on implicit and explicit attitudes towards smoking among young adults.

    PubMed

    Macy, Jonathan T; Chassin, Laurie; Presson, Clark C; Yeung, Ellen

    2016-01-01

    To test the effect of exposure to the US Food and Drug Administration's proposed graphic images with text warning statements for cigarette packages on implicit and explicit attitudes towards smoking. A two-session web-based study was conducted with 2192 young adults 18-25-years-old. During session one, demographics, smoking behaviour, and baseline implicit and explicit attitudes were assessed. Session two, completed on average 18 days later, contained random assignment to viewing one of three sets of cigarette packages, graphic images with text warnings, text warnings only, or current US Surgeon General's text warnings. Participants then completed post-exposure measures of implicit and explicit attitudes. ANCOVAs tested the effect of condition on the outcomes, controlling for baseline attitudes. Smokers who viewed packages with graphic images plus text warnings demonstrated more negative implicit attitudes compared to smokers in the other conditions (p = .004). For the entire sample, explicit attitudes were more negative for those who viewed graphic images plus text warnings compared to those who viewed current US Surgeon General's text warnings (p = .014), but there was no difference compared to those who viewed text-only warnings. Graphic health warnings on cigarette packages can influence young adult smokers' implicit attitudes towards smoking.

  9. Recommendations for the Use of Ultrasound and Magnetic Resonance in Patients With Spondyloarthritis, Including Psoriatic Arthritis, and Patients With Juvenile Idiopathic Arthritis.

    PubMed

    Uson, Jacqueline; Loza, Estibaliz; Möller, Ingrid; Acebes, Carlos; Andreu, Jose Luis; Batlle, Enrique; Bueno, Ángel; Collado, Paz; Fernández-Gallardo, Juan Manuel; González, Carlos; Jiménez Palop, Mercedes; Lisbona, María Pilar; Macarrón, Pilar; Maymó, Joan; Narváez, Jose Antonio; Navarro-Compán, Victoria; Sanz, Jesús; Rosario, M Piedad; Vicente, Esther; Naredo, Esperanza

    To develop evidence-based recommendations on the use of ultrasound (US) and magnetic resonance imaging in patients with spondyloarthritis, including psoriatic arthritis, and juvenile idiopathic arthritis. Recommendations were generated following a nominal group technique. A panel of experts (15 rheumatologists and 3 radiologists) was established in the first panel meeting to define the scope and purpose of the consensus document, as well as chapters, potential recommendations and systematic literature reviews (we used and updated those from previous EULAR documents). A first draft of recommendations and text was generated. Then, an electronic Delphi process (2 rounds) was carried out. Recommendations were voted from 1 (total disagreement) to 10 (total agreement). We defined agreement if at least 70% of participants voted ≥7. The level of evidence and grade of recommendation was assessed using the Oxford Centre for Evidence Based Medicine levels of evidence. The full text was circulated and reviewed by the panel. The consensus was coordinated by an expert methodologist. A total of 12 recommendations were proposed for each disease. They include, along with explanations of the validity of US and magnetic resonance imaging regarding inflammation and damage detection, diagnosis, prediction (structural damage progression, flare, treatment response, etc.), monitoring and the use of US-guided injections/biopsies. These recommendations will help clinicians use US and magnetic resonance imaging in patients with spondyloarthritis and juvenile idiopathic arthritis. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  10. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices.

    PubMed

    Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina

    2015-04-01

    An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., [Formula: see text]) and with a larger offset length (i.e., [Formula: see text]), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
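
    A small sketch of how such an equivalence check might look for a single texture feature, assuming a recent scikit-image (the graycomatrix/graycoprops spelling) for the GLCM feature and using the Hodges-Lehmann estimator (median of all pairwise differences) to quantify the shift between two systems' feature distributions; the feature choice and the acceptance rule are illustrative assumptions.

        # One GLCM texture feature per phantom image, then a Hodges-Lehmann
        # estimate of the shift between two systems' feature distributions.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_contrast(image_u8, offset=1):
            glcm = graycomatrix(image_u8, distances=[offset], angles=[0],
                                levels=256, symmetric=True, normed=True)
            return graycoprops(glcm, "contrast")[0, 0]

        def hodges_lehmann(a, b):
            # Median of all pairwise differences between the two samples.
            a, b = np.asarray(a, float), np.asarray(b, float)
            return np.median(a[:, None] - b[None, :])

        # feats_A, feats_B: contrast values from phantom images on systems A and B.
        # shift = hodges_lehmann(feats_A, feats_B)
        # The feature is called robust if |shift| stays within the intra-system
        # variation (e.g., the within-system interquartile range).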

  11. Assessment of natural enamel lesions with optical coherence tomography in comparison with microfocus x-ray computed tomography.

    PubMed

    Espigares, Jorge; Sadr, Alireza; Hamba, Hidenori; Shimada, Yasushi; Otsuki, Masayuki; Tagami, Junji; Sumi, Yasunori

    2015-01-01

    A technology to characterize early enamel lesions is needed in dentistry. Optical coherence tomography (OCT) is a noninvasive method that provides high-resolution cross-sectional images. The aim of this study is to compare OCT with microfocus x-ray computed tomography (μCT) for assessment of natural enamel lesions in vitro. Ten human teeth with visible white spot-like changes on the enamel smooth surface and no cavitation (ICDAS code 2) were subjected to imaging by μCT (SMX-100CT, Shimadzu) and 1300-nm swept-source OCT (Dental SS-OCT, Panasonic Health Care). In μCT, the lesions appeared as radiolucent dark areas, while in SS-OCT, they appeared as areas of increased signal intensity beneath the surface. An SS-OCT attenuation coefficient based on the Beer-Lambert law could discriminate lesions from sound enamel. Lesion depth ranged from 175 to [Formula: see text] in SS-OCT. A correlation between μCT and SS-OCT was found regarding lesion depth ([Formula: see text], [Formula: see text]) and also surface layer thickness ([Formula: see text], [Formula: see text]). The images obtained clinically in real time using the dental SS-OCT system are suitable for the assessment of natural subsurface lesions and their surface layer, providing images comparable to a laboratory high-resolution μCT without the use of x-rays.
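
    A minimal sketch of a Beer-Lambert-style attenuation estimate from a single OCT A-scan: since I(z) ≈ I0·exp(−2μz), a linear fit of ln I(z) against depth yields μ. The pixel spacing and fitting window below are illustrative assumptions, not the study's acquisition parameters.

        # Beer-Lambert attenuation coefficient from one OCT A-scan:
        # I(z) ~ I0 * exp(-2*mu*z), so a linear fit of ln I(z) vs depth gives mu.
        import numpy as np

        def attenuation_coefficient(a_scan, pixel_mm=0.008, start=None, stop=None):
            """a_scan: 1-D array of linear-scale OCT intensities along depth."""
            z = np.arange(a_scan.size) * pixel_mm   # depth in mm (assumed spacing)
            sl = slice(start, stop)                 # restrict the fit to the lesion
            slope, _ = np.polyfit(z[sl], np.log(a_scan[sl] + 1e-12), 1)
            return -slope / 2.0                     # mu in 1/mm (round-trip path)

        # Comparing mu inside a lesion with mu in sound enamel then gives the
        # discrimination described in the abstract.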

  12. A text input system developed by using lips image recognition based LabVIEW for the seriously disabled.

    PubMed

    Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M

    2004-01-01

    In this paper, we present a text input system for the seriously disabled using lips image recognition based on LabVIEW. This system can be divided into a software subsystem and a hardware subsystem. In the software subsystem, we adopted image processing techniques to recognize whether the mouth is open or closed depending on the relative distance between the upper lip and the lower lip. In the hardware subsystem, the parallel port built into the PC is used to transmit the recognized mouth status to the Morse-code text input system. Integrating the software subsystem with the hardware subsystem, we implement a text input system using lips image recognition programmed in the LabVIEW language. We hope the system can help the seriously disabled to communicate with other people more easily.

  13. Recovery of handwritten text from the diaries and papers of David Livingstone

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.; Easton, Roger L., Jr.; Christens-Barry, William A.; Boydston, Kenneth

    2011-03-01

    During his explorations of Africa, David Livingstone kept a diary and wrote letters about his experiences. Near the end of his travels, he ran out of paper and ink and began recording his thoughts on leftover newspaper with ink made from local seeds. These writings suffer from fading, from interference with the printed text and from bleed-through of the handwriting on the other side of the paper, making them hard to read. New image processing techniques have been developed to deal with these papers and make Livingstone's handwriting available for scholars to read. A scan of David Livingstone's papers was made using a twelve-wavelength, multispectral imaging system. The wavelengths ranged from the ultraviolet to the near infrared. In these wavelengths, the three different types of writing behave differently, making them distinguishable from each other. So far, three methods have been used to recover Livingstone's handwriting. These include pseudocolor (to make the different writings distinguishable), spectral band ratios (to remove text that does not change), and principal components analysis (to separate the different writings). In initial trials, these techniques have been able to lift handwriting off printed text and have suppressed handwriting that has bled through from the other side of the paper.
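
    A brief sketch of two of the named techniques, spectral band ratios and principal components analysis, applied to a 12-band multispectral stack; the array shapes and band indices are hypothetical.

        # Spectral band ratio and principal components analysis over a
        # 12-band multispectral stack (H x W x 12); band indices are hypothetical.
        import numpy as np
        from sklearn.decomposition import PCA

        def band_ratio(stack, num_band, den_band):
            # Suppresses writing that is common to both bands.
            return stack[..., num_band] / (stack[..., den_band] + 1e-6)

        def principal_component_images(stack, n_components=3):
            h, w, bands = stack.shape
            pixels = stack.reshape(-1, bands).astype(np.float64)
            scores = PCA(n_components=n_components).fit_transform(pixels)
            return scores.reshape(h, w, n_components)   # each slice is one PC image

        # pcs = principal_component_images(stack)
        # Individual components often isolate one class of writing from the others.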

  14. Visualization index for image-enabled medical records

    NASA Astrophysics Data System (ADS)

    Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo

    2011-03-01

    With the widespread use of healthcare information technology in hospitals, patients' medical records are becoming more and more complex. To transform text- or image-based medical information into a form that is easily understandable and acceptable to humans, we designed and developed an innovative indexing method that assigns an anatomical 3D structure object to every patient in order to visually store indexes of the patient's basic information, historical examination image information and RIS report information. When a doctor wants to review a patient's historical records, he or she can first load the anatomical structure object and then view the 3D index of this object using a digital human model tool kit. This prototype system helps doctors to easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time that may be better used to understand information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.

  15. Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.

    PubMed

    Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai

    2018-06-01

    Landmark retrieval aims to return a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matches. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both its visual content and its text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation between the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal contents of social images for landmark retrieval, integrating feature refinement and a landmark classifier over multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a measure of semantic consistency between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach as compared to existing methods.

  16. MO-DE-BRA-06: 3D Image Acquisition and Reconstruction Explained with Online Animations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kesner, A

    Purpose: Understanding the principles of 3D imaging and image reconstruction is fundamental to the field of medical imaging. Clinicians, technologists, physicists, patients, students, and inquisitive minds all stand to benefit from greater comprehension of the supporting technologies. To help explain the basic principles of 3D imaging, we developed multi-frame animations that convey the concepts of tomographic imaging. The series of free (gif) animations are accessible online, and provide a multimedia introduction to the main concepts of image reconstruction. Methods: Text and animations were created to convey the principles of analytic tomography in CT, PET, and SPECT. Specific topics covered included: principles of sinograms/image data storage, forward projection, principles of PET acquisitions, and filtered backprojection. A total of 8 animations were created and presented for CT, PET, and digital phantom formats. In addition, a free executable is also provided to allow users to create their own tomographic animations – providing an opportunity for interaction and personalization to help foster user interest. Results: Tutorial text and animations have been posted online, freely available to view or download. The animations are in first position in a google search of “image reconstruction animations”. The website currently receives approximately 200 hits/month, from all over the world, and the usage is growing. Positive feedback has been collected from users. Conclusion: We identified a need for improved teaching tools to help visualize the (temporally variant) concepts of image reconstruction, and have shown that animations can be a useful tool for this aspect of education. Furthermore, posting animations freely on the web has shown to be a good way to maximize their impact in the community. In future endeavors, we hope to expand this animated content, to cover principles of iterative reconstruction, as well as other phenomena relating to imaging.
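
    For readers who want to reproduce the flavour of such animations, the sketch below (not the authors' code) uses scikit-image's radon/iradon to build frames in which a filtered-backprojection reconstruction of a phantom improves as more projection angles are added; the filter_name spelling assumes a recent scikit-image.

        # Frames for a filtered-backprojection animation: reconstruction of a
        # phantom improves as more projection angles are used.
        import numpy as np
        import imageio
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, resize

        phantom = resize(shepp_logan_phantom(), (128, 128))
        frames = []
        for n_angles in range(10, 181, 10):
            theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            sinogram = radon(phantom, theta=theta)
            recon = iradon(sinogram, theta=theta, filter_name="ramp")
            frames.append((np.clip(recon, 0, 1) * 255).astype(np.uint8))

        imageio.mimsave("fbp_steps.gif", frames, duration=0.3)   # multi-frame gif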

  17. Vaccine-Preventable Disease Photos

    MedlinePlus

    ... Typhoid fever HPV Polio Whooping cough Influenza (flu) Rabies Yellow fever Photo Library Photographs accompanied by text ... images Pneumococcus Three images Polio Twenty-six images Rabies Ten images Rotavirus Two images Rubella Fifteen images ...

  18. BaffleText: a Human Interactive Proof

    NASA Astrophysics Data System (ADS)

    Chew, Monica; Baird, Henry S.

    2003-01-01

    Internet services designed for human use are being abused by programs. We present a defense against such attacks in the form of a CAPTCHA (Completely Automatic Public Turing test to tell Computers and Humans Apart) that exploits the difference in ability between humans and machines in reading images of text. CAPTCHAs are a special case of 'human interactive proofs,' a broad class of security protocols that allow people to identify themselves over networks as members of given groups. We point out vulnerabilities of reading-based CAPTCHAs to dictionary and computer-vision attacks. We also draw on the literature on the psychophysics of human reading, which suggests fresh defenses available to CAPTCHAs. Motivated by these considerations, we propose BaffleText, a CAPTCHA which uses non-English pronounceable words to defend against dictionary attacks, and Gestalt-motivated image-masking degradations to defend against image restoration attacks. Experiments on human subjects confirm the human legibility and user acceptance of BaffleText images. We have found an image-complexity measure that correlates well with user acceptance and assists in engineering the generation of challenges to fit the ability gap. Recent computer-vision attacks, run independently by Mori and Malik, suggest that BaffleText is stronger than two existing CAPTCHAs.

  19. Visual Images of Subjective Perception of Time in a Literary Text

    ERIC Educational Resources Information Center

    Nesterik, Ella V.; Issina, Gaukhar I.; Pecherskikh, Taliya F.; Belikova, Oxana V.

    2016-01-01

    The article is devoted to the subjective perception of time, or psychological time, as a text category and a literary image. It focuses on the visual images that are characteristic of different types of literary time--accelerated, decelerated and frozen (vanished). The research is based on the assumption that the category of subjective perception…

  20. Degraded Chinese rubbing images thresholding based on local first-order statistics

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Hou, Ling-Ying; Huang, Han

    2017-06-01

    Segmenting Chinese characters from degraded document images is a necessary step in optical character recognition (OCR); however, it is challenging due to the various kinds of noise in such images. In this paper, we present three local first-order statistics methods for adaptive thresholding that segment the text and non-text regions of Chinese rubbing images. The segmentation results were examined both by visual inspection and numerically. In experiments, the approach obtained better results than classical techniques for the binarization of real Chinese rubbing images and the PHIBD 2012 dataset.
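
    The abstract does not spell out the three statistics used, so the sketch below shows one representative local-statistics rule (a Niblack-style threshold from the local mean and standard deviation) as a stand-in for the paper's methods; the window size and k value are illustrative.

        # Niblack-style local threshold from the local mean and standard deviation.
        # Window size and k are illustrative; the paper's exact statistics differ.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_threshold(gray, window=25, k=-0.2):
            """gray: 2-D float array in [0, 1]; returns a binary text mask."""
            mean = uniform_filter(gray, size=window)
            sq_mean = uniform_filter(gray * gray, size=window)
            std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
            threshold = mean + k * std
            return gray < threshold            # dark strokes on a light background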

  1. Information system to manage anatomical knowledge and image data about brain

    NASA Astrophysics Data System (ADS)

    Barillot, Christian; Gibaud, Bernard; Montabord, E.; Garlatti, S.; Gauthier, N.; Kanellos, I.

    1994-09-01

    This paper reports on the first results obtained in a project aiming at developing a computerized system to manage knowledge about brain anatomy. The emphasis is put on the design of a knowledge base which includes a symbolic model of cerebral anatomical structures (grey nuclei, cortical structures such as gyri and sulci, ventricles, vessels, etc.) and of hypermedia facilities allowing the user to retrieve and display information associated with the objects (texts, drawings, images). Atlas plates digitized from a stereotactic atlas are also used to provide a natural and effective means of communication between the user and the system.

  2. 3-D reservoir characterization of the House Creek oil field, Powder River Basin, Wyoming

    USGS Publications Warehouse

    Higley, Debra K.; Pantea, Michael P.; Slatt, Roger M.

    1997-01-01

    This CD-ROM is intended to serve a broad audience. An important purpose is to explain geologic and geochemical factors that control petroleum production from the House Creek Field. This information may serve as an analog for other marine-ridge sandstone reservoirs. The 3-D slide and movie images are tied to explanations and 2-D geologic and geochemical images to visualize geologic structures in three dimensions, explain the geologic significance of porosity/permeability distribution across the sandstone bodies, and tie this to petroleum production characteristics in the oil field. Movies, text, images including scanning electron photomicrographs (SEM), thin-section photomicrographs, and data files can be copied from the CD-ROM for use in external mapping, statistical, and other applications.

  3. Influence of Framing and Graphic Format on Comprehension of Risk Information among American Indian Tribal College Students

    PubMed Central

    Sprague, Debra; Russo, Joan E.; LaVallie, Donna L.; Buchwald, Dedra S.

    2012-01-01

    We evaluated methods for presenting risk information by administering 6 versions of an anonymous survey to 489 American Indian tribal college students. All surveys presented identical numeric information, but framing varied. Half expressed prevention benefits as relative risk reduction, half as absolute risk reduction. One-third of surveys used text to describe prevention benefits; 1/3 used text plus bar graph; 1/3 used text plus modified bar graph incorporating a culturally tailored image. The odds ratio (OR) for correct risk interpretation for absolute risk framing vs. relative risk framing was 1.40 (95% CI=1.01, 1.93). The OR for correct interpretation of text plus bar graph vs. text only was 2.16 (95% CI=1.46, 3.19); OR for text plus culturally tailored bar graph vs. text only was 1.72 (95% CI=1.14, 2.60). Risk information including a bar graph was better understood than text-only information; a culturally tailored graph was no more effective than a standard graph. PMID:22544538

  4. Traditional text-only versus multimedia-enhanced radiology reporting: referring physicians' perceptions of value.

    PubMed

    Sadigh, Gelareh; Hertweck, Timothy; Kao, Cristine; Wood, Paul; Hughes, Danny; Henry, Travis S; Duszak, Richard

    2015-05-01

    The aim of this study was to evaluate referring physicians' perceptions of multimedia-enhanced radiology reporting (MERR) as an alternative to traditional text-only radiology reporting. MERR supplements text-only reports by embedding user-friendly interactive hyperlinks to key images and graphically plotting target lesion size longitudinally over time. Of 402 physicians responding to a web-based survey, 200 (50 each medical oncologists, radiation oncologists, neurosurgeons, and pulmonologists) practicing in the United States fulfilled criteria to complete an online survey with questions focusing on satisfaction with current text-only reports and the perceived value of image- and data-enriched reporting. The mean respondent age was 46 years, with a mean of 15 years in posttraining clinical practice (85% men; 47% from academic medical centers). Although 80% were satisfied with the format of their current text-only radiology reports, 80% believed that MERR would represent an improvement. The most commonly reported advantages of MERR were "improved understanding of radiology findings by correlating images to text reports" (86%) and "easier access to images while monitoring progression of a disease/condition" (79%). Of the 28% of physicians with concerns about MERR implementation, the most common were that it was "too time intensive" (53%) and "the clinic workflow does not allow itself to view reports in such a fashion" (42%). Physicians indicated a strong increased likelihood of preferentially referring patients to (80%) and recommending peers to (79%) facilities that offer MERR. Most specialist referring physicians believe that interactive image- and data-embedded MERR represents an improvement over current text-only radiology reporting. Compared with current report formatting, most would preferentially refer patients and peers to facilities offering more meaningful image- and graphically enriched reporting platforms. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  5. Assessing the validity of discourse analysis: transdisciplinary convergence

    NASA Astrophysics Data System (ADS)

    Jaipal-Jamani, Kamini

    2014-12-01

    Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to research. The argument is made that discourse analysis explicitly grounded in semiotics, systemic functional linguistics, and critical theory, offers a credible research methodology. The underlying assumptions, constructs, and techniques of analysis of these three theoretical disciplines can be drawn on to show convergence of data at multiple levels, validating interpretations from text analysis.

  6. Generating Text from Functional Brain Images

    PubMed Central

    Pereira, Francisco; Detre, Greg; Botvinick, Matthew

    2011-01-01

    Recent work has shown that it is possible to take brain images acquired during viewing of a scene and reconstruct an approximation of the scene from those images. Here we show that it is also possible to generate text about the mental content reflected in brain images. We began with images collected as participants read names of concrete items (e.g., "Apartment") while also seeing line drawings of the item named. We built a model of the mental semantic representation of concrete concepts from text data and learned to map aspects of such representation to patterns of activation in the corresponding brain image. In order to validate this mapping, without accessing information about the items viewed for left-out individual brain images, we were able to generate from each one a collection of semantically pertinent words (e.g., "door," "window" for "Apartment"). Furthermore, we show that the ability to generate such words allows us to perform a classification task and thus validate our method quantitatively. PMID:21927602

  7. Harvesting geographic features from heterogeneous raster maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial dataset. The road vectorization and text recognition results outperform state-of-the-art commercial products with considerably less user input. The approach in this thesis allows us to make use of the geospatial information of heterogeneous maps locked in raster format.
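
    The property exploited by the automatic technique, elongated road lines versus small compact characters, can be illustrated with a minimal connected-component sketch (assuming scikit-image; the toy foreground map and the elongation threshold below are illustrative, not values from the thesis):

      import numpy as np
      from skimage.measure import label, regionprops

      # Toy foreground map: one long "road" line plus two character-sized blobs.
      img = np.zeros((100, 100), dtype=bool)
      img[50, 5:95] = True              # elongated linear object (road)
      img[20:24, 10:13] = True          # small connected objects (characters)
      img[20:24, 16:19] = True

      labels = label(img, connectivity=2)
      road_mask = np.zeros_like(img)
      text_mask = np.zeros_like(img)
      for region in regionprops(labels):
          h = region.bbox[2] - region.bbox[0]
          w = region.bbox[3] - region.bbox[1]
          elongation = max(h, w) / max(1, min(h, w))
          # Assumed heuristic: long thin components -> road layer,
          # small compact components -> text layer.
          target = road_mask if elongation > 5 else text_mask
          target[labels == region.label] = True

      print("road pixels:", int(road_mask.sum()), "text pixels:", int(text_mask.sum()))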

  8. Social Image Tag Ranking by Two-View Learning

    NASA Astrophysics Data System (ADS)

    Zhuang, Jinfeng; Hoi, Steven C. H.

    Tags play a central role in text-based social image retrieval and browsing. However, the tags annotated by web users could be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. In order to solve this problem, researchers have proposed techniques to rank the annotated tags of a social image according to their relevance to the visual content of the image. In this paper, we aim to overcome the challenge of social image tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike the conventional learning approaches that usually assume some parametric models, our method is completely data-driven and makes no assumption about the underlying models, making the proposed solution practically more effective. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks. Our empirical results showed that the proposed method can be more effective than the conventional approaches.

  9. Pulse wave imaging using coherent compounding in a phantom and in vivo

    NASA Astrophysics Data System (ADS)

    Zacharias Apostolakis, Iason; McGarry, Matthew D. J.; Bunting, Ethan A.; Konofagou, Elisa E.

    2017-03-01

    Pulse wave velocity (PWV) is a surrogate marker of arterial stiffness linked to cardiovascular morbidity. Pulse wave imaging (PWI) is a technique developed by our group for imaging the pulse wave propagation in vivo. PWI requires high temporal and spatial resolution, which conventional ultrasonic imaging is unable to simultaneously provide. Coherent compounding is known to address this tradeoff and provides full-aperture images at high frame rates. This study aims to implement PWI using coherent compounding within a GPU-accelerated framework. The results of the implemented method were validated using a silicone phantom against static mechanical testing. Reproducibility of the measured PWVs was assessed in the right common carotid of six healthy subjects (n = 6), approximately 10-15 mm before the bifurcation, during two cardiac cycles over the course of 1-3 days. Good agreement of the measured PWVs (3.97 ± 1.21 m/s vs. 4.08 ± 1.15 m/s; p = 0.74) was obtained. The effects of frame rate, transmission angle and number of compounded plane waves on PWI performance were investigated in the six healthy volunteers. Performance metrics such as the reproducibility of the PWVs, the coefficient of determination (r²), the SNR of the PWI axial wall velocities (SNR_vPWI), and the percentage of lateral positions where the pulse wave appears to arrive at the same time-point, indicating inadequacy of the temporal resolution (i.e. temporal resolution misses), were used to evaluate the effect of each parameter. Compounding plane waves transmitted at 1° increments with a linear array yielded optimal performance, generating significantly higher r² and SNR_vPWI values (p ⩽ 0.05). Higher frame rates (⩾1667 Hz) produced improvements, with significant gains in the r² coefficient (p ⩽ 0.05) and a significant increase in both r² and SNR_vPWI from single-plane-wave imaging to 3-plane-wave compounding (p ⩽ 0.05). Optimal performance was established at 2778 Hz with 3 plane waves and at 1667 Hz with 5 plane waves.
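
    PWV itself is conventionally estimated by regressing pulse-wave arrival time against position along the vessel and inverting the slope; the short sketch below illustrates that step on synthetic arrival times (it does not reproduce the compounding or wall-tracking pipeline, and all values are invented):

      import numpy as np

      # Toy arrival times: one pulse-wave arrival time per lateral position along the
      # vessel wall; positions, noise level and the 4 m/s "true" PWV are invented.
      rng = np.random.default_rng(1)
      positions_mm = np.linspace(0, 25, 11)
      arrival_ms = positions_mm / 4.0 + rng.normal(0, 0.05, positions_mm.size)

      # PWV is the inverse slope of the arrival-time vs. position regression line
      # (mm/ms is numerically equal to m/s).
      slope, intercept = np.polyfit(positions_mm, arrival_ms, 1)
      print(f"estimated PWV = {1.0 / slope:.2f} m/s")

      # r^2 of the fit, analogous to the coefficient of determination quoted above.
      pred = slope * positions_mm + intercept
      ss_res = np.sum((arrival_ms - pred) ** 2)
      ss_tot = np.sum((arrival_ms - arrival_ms.mean()) ** 2)
      print(f"r^2 = {1 - ss_res / ss_tot:.3f}")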

  10. Current perspectives in the use of molecular imaging to target surgical treatments for genitourinary cancers.

    PubMed

    Greco, Francesco; Cadeddu, Jeffrey A; Gill, Inderbir S; Kaouk, Jihad H; Remzi, Mesut; Thompson, R Houston; van Leeuwen, Fijs W B; van der Poel, Henk G; Fornara, Paolo; Rassweiler, Jens

    2014-05-01

    Molecular imaging (MI) entails the visualisation, characterisation, and measurement of biologic processes at the molecular and cellular levels in humans and other living systems. Translating this technology to interventions in real-time enables interventional MI/image-guided surgery, for example, by providing better detection of tumours and their dimensions. To summarise and critically analyse the available evidence on image-guided surgery for genitourinary (GU) oncologic diseases. A comprehensive literature review was performed using PubMed and the Thomson Reuters Web of Science. In the free-text protocol, the following terms were applied: molecular imaging, genitourinary oncologic surgery, surgical navigation, image-guided surgery, and augmented reality. Review articles, editorials, commentaries, and letters to the editor were included if deemed to contain relevant information. We selected 79 articles according to the search strategy based on the Preferred Reporting Items for Systematic Reviews and Meta-analysis criteria and the IDEAL method. MI techniques included optical imaging and fluorescent techniques, the augmented reality (AR) navigation system, magnetic resonance imaging spectroscopy, positron emission tomography, and single-photon emission computed tomography. Experimental studies on the AR navigation system were restricted to the detection and therapy of adrenal and renal malignancies and in the relatively infrequent cases of prostate cancer, whereas fluorescence techniques and optical imaging presented a wide application of intraoperative GU oncologic surgery. In most cases, image-guided surgery was shown to improve the surgical resectability of tumours. Based on the evidence to date, image-guided surgery has promise in the near future for multiple GU malignancies. Further optimisation of targeted imaging agents, along with the integration of imaging modalities, is necessary to further enhance intraoperative GU oncologic surgery. Copyright © 2013 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  11. Rotation-invariant features for multi-oriented text detection in natural images.

    PubMed

    Yao, Cong; Zhang, Xin; Bai, Xiang; Liu, Wenyu; Ma, Yi; Tu, Zhuowen

    2013-01-01

    Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.

  12. Chemopreventive Agent Development | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"174","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Chemoprevenentive Agent Development Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Chemoprevenentive Agent Development Research Group Homepage

  13. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg, which is also available in his open-source Leptonica library. The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.
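
    A greatly simplified sketch of the multiresolution-morphology idea (reduce resolution, close and open with large structuring elements to isolate halftone regions, then subtract them from the page) is given below; the reduction factor and structuring-element size are assumed for illustration and are not the tuned values from Bloomberg's algorithm or Leptonica:

      import numpy as np
      from scipy import ndimage

      def split_text_nontext(binary_page, reduce_factor=4, seed_size=15):
          """Greatly simplified multiresolution-morphology split: non-text
          (halftone/graphics) regions survive as large solid blobs after closing
          and opening at reduced resolution; everything else is treated as text.
          binary_page is a boolean ink mask; parameter values are assumed."""
          h, w = binary_page.shape
          h2, w2 = h // reduce_factor, w // reduce_factor
          # 1. Block-OR reduction to make the large morphology cheap.
          small = binary_page[:h2 * reduce_factor, :w2 * reduce_factor]
          small = small.reshape(h2, reduce_factor, w2, reduce_factor).any(axis=(1, 3))
          # 2. Closing merges halftone dots into solid regions.
          closed = ndimage.binary_closing(small, structure=np.ones((seed_size, seed_size)))
          # 3. Opening removes thin text strokes, keeping only big solid regions.
          seeds = ndimage.binary_opening(closed, structure=np.ones((seed_size, seed_size)))
          # 4. Expand the non-text mask back to full resolution.
          nontext = np.kron(seeds.astype(np.uint8),
                            np.ones((reduce_factor, reduce_factor), dtype=np.uint8)).astype(bool)
          nontext = np.pad(nontext, ((0, h - nontext.shape[0]), (0, w - nontext.shape[1])))
          return binary_page & ~nontext, nontext

      # Toy page: thin text-like strokes plus one large solid "photo" block.
      page = np.zeros((200, 200), dtype=bool)
      page[20:24, 10:120] = True
      page[30:34, 10:120] = True
      page[100:180, 60:180] = True
      text, nontext = split_text_nontext(page)
      print("text px:", int(text.sum()), "non-text px:", int(nontext.sum()))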

  14. Visual affective classification by combining visual and text features.

    PubMed

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
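
    The fusion step relies on Dempster's rule of combination; a minimal sketch of that rule for two hypothetical mass functions (one from visual features, one from text features) is shown below. The class set and mass values are invented for illustration:

      from itertools import product

      def dempster_combine(m1, m2):
          """Dempster's rule of combination for two mass functions whose focal
          elements are frozensets of class labels (toy illustration of the fusion step)."""
          combined, conflict = {}, 0.0
          for (a, ma), (b, mb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + ma * mb
              else:
                  conflict += ma * mb
          if conflict >= 1.0:
              raise ValueError("total conflict; masses cannot be combined")
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      POS, NEG = frozenset({"positive"}), frozenset({"negative"})
      EITHER = POS | NEG
      # Invented per-view masses: one from visual features, one from text features.
      m_visual = {POS: 0.6, NEG: 0.2, EITHER: 0.2}
      m_text = {POS: 0.5, NEG: 0.3, EITHER: 0.2}
      print(dempster_combine(m_visual, m_text))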

  15. Visual affective classification by combining visual and text features

    PubMed Central

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566

  16. Data-Base Software For Tracking Technological Developments

    NASA Technical Reports Server (NTRS)

    Aliberti, James A.; Wright, Simon; Monteith, Steve K.

    1996-01-01

    Technology Tracking System (TechTracS) computer program developed for use in storing and retrieving information on technology and related patent information developed under auspices of NASA Headquarters and NASA's field centers. Contents of data base include multiple scanned still images and quick-time movies as well as text. TechTracS includes word-processing, report-editing, chart-and-graph-editing, and search-editing subprograms. Extensive keyword searching capabilities enable rapid location of technologies, innovators, and companies. System performs routine functions automatically and serves multiple users.

  17. Designing Multimedia Learning Systems for Adult Learners: Basic Skills with a Workforce Emphasis. NCAL Working Paper.

    ERIC Educational Resources Information Center

    Sabatini, John P.

    An analysis was conducted of the results of a formative evaluation of the LiteracyLink "Workplace Essential Skills" (WES) learning system conducted in the fall of 1998. (The WES learning system is a multimedia learning system integrating text, sound, graphics, animation, video, and images in a computer system and includes a videotape series, a…

  18. The World History Videodisc, CD-ROM, and Master Guide: Non-European History [Multimedia.

    ERIC Educational Resources Information Center

    1996

    This resource represents a virtual library of still and moving images, documents, maps, sound clips and text which make up the history of the non-European world from prehistoric times to the early 1990s. The interdisciplinary range of materials included is compatible with standard textbooks in middle and high school social science, social studies,…

  19. Exposure to Graphic Warning Labels on Cigarette Packages: Effects on Implicit and Explicit Attitudes toward Smoking among Young Adults

    PubMed Central

    Macy, Jonathan T.; Chassin, Laurie; Presson, Clark C.; Yeung, Ellen

    2015-01-01

    Objective: Test the effect of exposure to the U.S. Food and Drug Administration's proposed graphic images with text warning statements for cigarette packages on implicit and explicit attitudes toward smoking. Design and methods: A two-session web-based study was conducted with 2192 young adults 18–25 years old. During session one, demographics, smoking behavior, and baseline implicit and explicit attitudes were assessed. Session two, completed on average 18 days later, involved random assignment to viewing one of three sets of cigarette packages: graphic images with text warnings, text warnings only, or current U.S. Surgeon General's text warnings. Participants then completed post-exposure measures of implicit and explicit attitudes. ANCOVAs tested the effect of condition on the outcomes, controlling for baseline attitudes. Results: Smokers who viewed packages with graphic images plus text warnings demonstrated more negative implicit attitudes compared to smokers in the other conditions (p=.004). For the entire sample, explicit attitudes were more negative for those who viewed graphic images plus text warnings compared to those who viewed current U.S. Surgeon General's text warnings (p=.014), but there was no difference compared to those who viewed text-only warnings. Conclusion: Graphic health warnings on cigarette packages can influence young adult smokers' implicit attitudes toward smoking. PMID:26442992

  20. Schemes for Integrating Text and Image in the Science Textbook: Effects on Comprehension and Situational Interest

    ERIC Educational Resources Information Center

    Peterson, Matthew O.

    2016-01-01

    Science education researchers have turned their attention to the use of images in textbooks, both because pages are heavily illustrated and because visual literacy is an important aptitude for science students. Text-image integration in the textbook is described here as composition schemes in increasing degrees of integration: prose primary (PP),…

  1. Cigarette Graphic Warning Labels Are Not Created Equal: They Can Increase or Decrease Smokers' Quit Intentions Relative to Text-Only Warnings.

    PubMed

    Evans, Abigail T; Peters, Ellen; Shoben, Abigail B; Meilleur, Louise R; Klein, Elizabeth G; Tompkins, Mary Kate; Romer, Daniel; Tusler, Martin

    2017-10-01

    Cigarette graphic-warning labels elicit negative emotion. Research suggests negative emotion drives greater risk perceptions and quit intentions through multiple processes. The present research compares text-only warning effectiveness to that of graphic warnings eliciting more or less negative emotion. Nationally representative online panels of 736 adult smokers and 469 teen smokers/vulnerable smokers were randomly assigned to view one of three warning types (text-only, text with low-emotion images, or text with high-emotion images) four times over 2 weeks. Participants recorded their emotional reaction to the warnings (measured as arousal), smoking risk perceptions, and quit intentions. Primary analyses used structural equation modeling. Participants in the high-emotion condition reported greater emotional reaction than text-only participants (bAdult = 0.21; bTeen = 0.27, p's < .004); those in the low-emotion condition reported lower emotional reaction than text-only participants (bAdult = -0.18; bTeen = -0.22, p's < .018). Stronger emotional reaction was associated with increased risk perceptions in both samples (bAdult = 0.66; bTeen = 0.85, p's < .001) and greater quit intentions among adults (bAdult = 1.00, p < .001). Compared to text-only warnings, low-emotion warnings were associated with reduced risk perceptions and quit intentions whereas high-emotion warnings were associated with increased risk perceptions and quit intentions. Warning labels with images that elicit more negative emotional reaction are associated with increased risk perceptions and quit intentions in adults and teens relative to text-only warnings. However, graphic warnings containing images which evoke little emotional reaction can backfire and reduce risk perceptions and quit intentions versus text-only warnings. This research is the first to directly manipulate two emotion levels in sets of nine cigarette graphic warning images and compare them with text-only warnings. Among adult and teen smokers, high-emotion graphic warnings were associated with increased risk perceptions and quit intentions versus text-only warnings. Low-emotion graphic warnings backfired and tended to reduce risk perceptions and quit intentions versus text-only warnings. Policy makers should be aware that merely placing images on cigarette packaging is insufficient to increase smokers' risk perceptions and quit intentions. Low-emotion graphic warnings will not necessarily produce desired population-level benefits relative to text-only or high-emotion warnings. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Combining multiple thresholding binarization values to improve OCR output

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Kennard, Douglas J.; Ringger, Eric K.

    2013-01-01

    For noisy, historical documents, a high optical character recognition (OCR) word error rate (WER) can render the OCR text unusable. Since image binarization is often the method used to identify foreground pixels, a body of research seeks to improve image-wide binarization directly. Instead of relying on any one imperfect binarization technique, our method incorporates information from multiple simple thresholding binarizations of the same image to improve text output. Using a new corpus of 19th century newspaper grayscale images for which the text transcription is known, we observe WERs of 13.8% and higher using current binarization techniques and a state-of-the-art OCR engine. Our novel approach combines the OCR outputs from multiple thresholded images by aligning the text output and producing a lattice of word alternatives from which a lattice word error rate (LWER) is calculated. Our results show a LWER of 7.6% when aligning two threshold images and a LWER of 6.8% when aligning five. From the word lattice we commit to one hypothesis by applying the methods of Lund et al. (2011), achieving an improvement over the original OCR output and an 8.41% WER on this data set.
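
    A minimal sketch of the first stage, producing several global-threshold binarizations of the same page that would each then be OCRed and aligned into a word lattice, is shown below; the threshold values and toy page are illustrative only:

      import numpy as np

      def threshold_set(gray, thresholds=(96, 128, 160, 192)):
          """Produce several global-threshold binarizations of the same grayscale page.
          Each binarization would then be OCRed and the outputs aligned into a word
          lattice (OCR and alignment omitted); the threshold values are illustrative."""
          gray = np.asarray(gray, dtype=np.uint8)
          return {t: (gray < t) for t in thresholds}   # True == foreground ink

      # Toy "page": dark text strokes on a noisy light background.
      rng = np.random.default_rng(0)
      page = np.full((64, 64), 200, dtype=float)
      page[30:34, 10:54] = 60
      page = np.clip(page + rng.normal(0, 20, page.shape), 0, 255).astype(np.uint8)

      for t, binary in threshold_set(page).items():
          print(f"threshold {t}: {int(binary.sum())} foreground pixels")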

  3. Automated extraction of radiation dose information for CT examinations.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan; Maidment, Andrew D A; Kim, Woojin; Boonn, William W

    2010-11-01

    Exposure to radiation as a result of medical imaging is currently in the spotlight, receiving attention from Congress as well as the lay press. Although scanner manufacturers are moving toward including effective dose information in the Digital Imaging and Communications in Medicine headers of imaging studies, there is a vast repository of retrospective CT data at every imaging center that stores dose information in an image-based dose sheet. As such, it is difficult for imaging centers to participate in the ACR's Dose Index Registry. The authors have designed an automated extraction system to query their PACS archive and parse CT examinations to extract the dose information stored in each dose sheet. First, an open-source optical character recognition program processes each dose sheet and converts the information to American Standard Code for Information Interchange (ASCII) text. Each text file is parsed, and radiation dose information is extracted and stored in a database which can be queried using an existing pathology and radiology enterprise search tool. Using this automated extraction pipeline, it is possible to perform dose analysis on the >800,000 CT examinations in the PACS archive and generate dose reports for all of these patients. It is also possible to more effectively educate technologists, radiologists, and referring physicians about exposure to radiation from CT by generating report cards for interpreted and performed studies. The automated extraction pipeline enables compliance with the ACR's reporting guidelines and greater awareness of radiation dose to patients, thus resulting in improved patient care and management. Copyright © 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
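
    The parsing stage can be illustrated with a small, hypothetical example: regular expressions applied to OCR output to pull series-level dose values into structured records (the field names, layout and values below are invented, since real dose sheets vary by vendor):

      import re

      # Invented OCR output for one dose sheet; real sheets differ by scanner vendor,
      # so the field names, layout and values below are purely illustrative.
      ocr_text = """
      Series  Scan Range      CTDIvol(mGy)   DLP(mGy-cm)
      1       HEAD            45.20          812.3
      2       NECK             8.75          301.9
      """

      row = re.compile(r"^\s*(\d+)\s+(\S+)\s+([\d.]+)\s+([\d.]+)\s*$", re.MULTILINE)
      records = [
          {"series": int(s), "region": region,
           "ctdi_vol_mGy": float(ctdi), "dlp_mGy_cm": float(dlp)}
          for s, region, ctdi, dlp in row.findall(ocr_text)
      ]
      print(records)   # structured rows, ready to insert into a queryable dose database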

  4. STS Case Study Development Support

    NASA Technical Reports Server (NTRS)

    Rosa de Jesus, Dan A.; Johnson, Grace K.

    2013-01-01

    The Shuttle Case Study Collection (SCSC) has been developed using lessons learned documented by NASA engineers, analysts, and contractors. The SCSC provides educators with a new tool to teach real-world engineering processes with the goal of providing unique educational materials that enhance critical thinking, decision-making and problem-solving skills. During this third phase of the project, responsibilities included revising the HyperText Markup Language (HTML) source code to ensure all pages follow World Wide Web Consortium (W3C) standards, and adding and editing website content, including text, documents, and images. Basic HTML knowledge was required, as was basic knowledge of photo editing software, and training to learn how to use NASA's Content Management System for website design. The outcome of this project was its release to the public.

  5. [Development of a Text-Data Based Learning Tool That Integrates Image Processing and Displaying].

    PubMed

    Shinohara, Hiroyuki; Hashimoto, Takeyuki

    2015-01-01

    We developed a text-data based learning tool that integrates image processing and display within Excel. Knowledge required for programming this tool is limited to using absolute, relative, and composite cell references and learning approximately 20 mathematical functions available in Excel. The new tool is capable of resolution translation, geometric transformation, spatial-filter processing, Radon transform, Fourier transform, convolutions, correlations, deconvolutions, wavelet transform, mutual information, and simulation of proton density-, T1-, and T2-weighted MR images. The processed images of 128 × 128 or 256 × 256 pixels are observed directly within Excel worksheets without using any particular image display software. The results of image processing using this tool were compared with those obtained using C language, and the new tool was judged to have sufficient accuracy to be practically useful. The images displayed on Excel worksheets were compared with images displayed using binary-data display software. This comparison indicated that the image quality of the Excel worksheets was nearly equal to the latter in visual impression. Since image processing is performed using text data, the process is visible and facilitates making contrasts by using mathematical equations within the program. We concluded that the newly developed tool is adequate as a computer-assisted learning tool for use in medical image processing.

  6. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored as either: (a) pre-rendered format, corresponding to a photographic print, or (b) un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depends on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding and generic image formats such as JPEG are sometimes used. However, these are lacking some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in the case of two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  7. Prostate and Urologic Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"183","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Prostate and Urologic Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Prostate and Urologic Cancer Research Group Homepage

  8. Influence of graphic format on comprehension of risk information among American Indians.

    PubMed

    Sprague, Debra; LaVallie, Donna L; Wolf, Fredric M; Jacobsen, Clemma; Sayson, Kirsten; Buchwald, Dedra

    2011-01-01

    Presentation of risk information influences patients' ability to interpret health care options. Little is known about this relationship between risk presentation and interpretation among American Indians. Three hundred American Indian employees on a western American Indian reservation were invited to complete an anonymous written survey. All surveys included a vignette presenting baseline risk information about a hypothetical cancer and possible benefits of 2 prevention plans. Risk interpretation was assessed by correct answers to 3 questions evaluating the risk reduction associated with the plans. Numeric information was the same in all surveys, but framing varied; half expressed prevention benefits in terms of relative risk reduction and half in terms of absolute risk reduction. All surveys used text to describe the benefits of the 2 plans, but half included a graphic image. Surveys were distributed randomly. Responses were analyzed using binary logistic regression with the robust variance estimator to account for clustering of outcomes within participant. Use of a graphic image was associated with higher odds of correctly answering 3 risk interpretation questions (odds ratio = 2.5, 95% confidence interval = 1.5-4.0, P < 0.001) compared to the text-only format. These findings were similar to those of previous studies carried out in the general population. Neither framing information as relative compared to absolute risk nor the interaction between graphic image and relative risk presentation was associated with risk interpretation. One type of graphic image was associated with increased understanding of risk in a small sample of American Indian adults. The authors recommend further investigation of the effectiveness of other types of graphic displays for conveying health risk information to this population.
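
    As a hedged illustration of the kind of clustered binary-outcome analysis described above (not the authors' exact model), the sketch below fits a GEE logistic regression with participant-level clustering to synthetic data; GEE's robust "sandwich" standard errors play a role analogous to the robust variance estimator mentioned in the abstract. All variable names and data are invented:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Synthetic stand-in data: three risk-interpretation answers per participant,
      # with indicators for graphic format and relative-risk framing (all invented).
      rng = np.random.default_rng(42)
      n_part, n_items = 300, 3
      pid = np.repeat(np.arange(n_part), n_items)
      graphic = np.repeat(rng.integers(0, 2, n_part), n_items)
      relative = np.repeat(rng.integers(0, 2, n_part), n_items)
      lin = -0.3 + 0.9 * graphic + np.repeat(rng.normal(0, 0.5, n_part), n_items)
      correct = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))
      df = pd.DataFrame(dict(pid=pid, graphic=graphic, relative=relative, correct=correct))

      # GEE logistic model clustered by participant; robust standard errors account
      # for the within-participant correlation of the three outcomes.
      fit = smf.gee("correct ~ graphic * relative", groups="pid", data=df,
                    family=sm.families.Binomial()).fit()
      print(np.exp(fit.params))   # odds ratios: graphic, framing, and interaction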

  9. Representations of Codeine Misuse on Instagram: Content Analysis

    PubMed Central

    Cherian, Roy; Westbrook, Marisa; Ramo, Danielle

    2018-01-01

    Background: Prescription opioid misuse has doubled over the past 10 years and is now a public health epidemic. Analysis of social media data may provide additional insights into opioid misuse to supplement the traditional approaches of data collection (eg, self-report on surveys). Objective: The aim of this study was to characterize representations of codeine misuse through analysis of public posts on Instagram to understand text phrases related to misuse. Methods: We identified hashtags and searchable text phrases associated with codeine misuse by analyzing 1156 sequential Instagram posts over the course of 2 weeks from May 2016 to July 2016. Content analysis of posts associated with these hashtags identified the most common themes arising in images, as well as culture around misuse, including how misuse is happening and being perpetuated through social media. Results: A majority of images (50/100; 50.0%) depicted codeine in its commonly misused form, combined with soda (lean). Codeine misuse was commonly represented with the ingestion of alcohol, cannabis, and benzodiazepines. Some images highlighted the previously noted affinity between codeine misuse and hip-hop culture or mainstream popular culture images. Conclusions: The prevalence of codeine misuse images, glamorizing of ingestion with soda and alcohol, and their integration with mainstream, popular culture imagery holds the potential to normalize and increase codeine misuse and overdose. To reduce harm and prevent misuse, immediate public health efforts are needed to better understand the relationship between the potential normalization, ritualization, and commercialization of codeine misuse. PMID:29559422

  10. Age and automation interact to influence performance of a simulated luggage screening task.

    PubMed

    Wiegmann, Douglas; McCarley, Jason S; Kramer, Arthur F; Wickens, Christopher D

    2006-08-01

    An experiment examined the impact of automation on young and old adults' abilities to detect threat objects in a simulated baggage-screening task. Younger and older adult participants viewed X-ray images of cluttered baggage, 20% of which contained a hidden knife. Some participants were provided an automated aid with a hit rate of 0.90 and a false alarm rate of 0.25. The aid provided assistance to participants in one of three forms: a text message that appeared before the stimulus image; a text message that appeared following the stimulus image; or a spatial cue concurrent with the stimulus image. Control participants performed the task with no assistance from an aid. Spatial cuing improved performance for both age groups. Text cuing improved young adults' performance, but had no benefit for older participants. Effects were similar whether the text cue preceded or followed the search stimulus itself. Results indicate that spatial cuing rather than text alerts may be more effective in aiding performance during a baggage screening task and such benefits are likely to occur regardless of operator age.

  11. The use of communication technology in medicine

    NASA Technical Reports Server (NTRS)

    Reis, Howard P.

    1991-01-01

    NYNEX Science and Technology is engineering a multi-layered approach to multimedia communications by combining high-resolution images, video, voice, and text into a new fiber-optic service. The service, Media Broadband Service (MBS), is a network-based visual communications capability. It permits real time sharing of images in support of collaborative work among geographically dispersed locations. The health care industry was identified as a primary target market due to their need for high resolution images, the need to transport these images over great distances, and the need to achieve the transport in a short amount of time. The NYNEX Corporation, the current state of the MBS project, including the market needs driving the development of MBS, the overall design of the service, its current implementation and development status, and the progress of MBS projects underway for various customers participating in the initial service offering are described.

  12. Using Cross-Sectional Imaging to Convey Organ Relationships: An Integrated Learning Environment for Students of Gross Anatomy

    PubMed Central

    Forman, Bruce H.; Eccles, Randy; Piggins, Judith; Raila, Wayne; Estey, Greg; Barnett, G. Octo

    1990-01-01

    We have developed a visually oriented, computer-controlled learning environment designed for use by students of gross anatomy. The goals of this module are to reinforce the concepts of organ relationships and topography by using computed axial tomographic (CAT) images accessed from a videodisc integrated with color graphics and to introduce students to cross-sectional radiographic anatomy. We chose to build the program around CAT scan images because they not only provide excellent structural detail but also offer an anatomic orientation (transverse) that complements that used in the dissection laboratory (basically a layer-by-layer, anterior-to-posterior, or coronal approach). Our system, built using a Microsoft Windows-386 based authoring environment which we designed and implemented, integrates text, video images, and graphics into a single screen display. The program allows both user browsing of information, facilitated by hypertext links, and didactic sessions including mini-quizzes for self-assessment.

  13. Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.

    PubMed

    Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi

    2017-02-01

    To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) computer mouse and sterile system drape, (2) fingers and sterile system drape, and (3) digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and to measure points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method than for the other methods when the target diameter was 8 mm or smaller. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and a sterile ultrasound probe cover with a pinhole.

  14. Pathologists dislike sound? Evaluation of a computerised training microscope.

    PubMed Central

    Gray, E; Duvall, E; Sprey, J; Bird, C C

    1998-01-01

    AIM: To evaluate the use of multimedia enhancements, using a computerised microscope, in the training of microscope skills. METHODS: The HOME microscope provides facilities to highlight features of interest in conjunction with either text display or aural presentation. A pilot study was carried out with 10 individuals, eight of whom were at different stages of pathology training. A tutorial was implemented employing sound or text, and each individual tested each version. Both the subjective impressions of users and objective measurement of their patterns of use were recorded. RESULTS: Although both versions improved learning, users took longer to work through the aural than the text version; 90% of users preferred the text-only version, including all eight individuals involved in pathology training. CONCLUSIONS: Pathologists appear to prefer visual rather than aural input when using teaching systems such as the HOME microscope and sound does not give added value to the training experience. PMID:9659250

  15. Fiber orientation measurements by diffusion tensor imaging improve hydrogen-1 magnetic resonance spectroscopy of intramyocellular lipids in human leg muscles.

    PubMed

    Valaparla, Sunil K; Gao, Feng; Daniele, Giuseppe; Abdul-Ghani, Muhammad; Clarke, Geoffrey D

    2015-04-01

    Twelve healthy subjects underwent hydrogen-1 magnetic resonance spectroscopy ([Formula: see text]) acquisition ([Formula: see text]), diffusion tensor imaging (DTI) with a [Formula: see text]-value of [Formula: see text], and fat-water magnetic resonance imaging (MRI) using the Dixon method. Subject-specific muscle fiber orientation, derived from DTI, was used to estimate the lipid proton spectral chemical shift. Pennation angles were measured as 23.78 deg in vastus lateralis (VL), 17.06 deg in soleus (SO), and 8.49 deg in tibialis anterior (TA) resulting in a chemical shift between extramyocellular lipids (EMCL) and intramyocellular lipids (IMCL) of 0.15, 0.17, and 0.19 ppm, respectively. IMCL concentrations were [Formula: see text], [Formula: see text], and [Formula: see text] in SO, VL, and TA, respectively. Significant differences were observed in IMCL and EMCL pairwise comparisons in SO, VL, and TA ([Formula: see text]). Strong correlations were observed between total fat fractions from [Formula: see text] and Dixon MRI for VL ([Formula: see text]), SO ([Formula: see text]), and TA ([Formula: see text]). Bland-Altman analysis between fat fractions (FFMRS and FFMRI) showed good agreement with small limits of agreement (LoA): [Formula: see text] (LoA: [Formula: see text] to 0.69%) in VL, [Formula: see text] (LoA: [Formula: see text] to 1.33%) in SO, and [Formula: see text] (LoA: [Formula: see text] to 0.47%) in TA. The results of this study demonstrate the variation in muscle fiber orientation and lipid concentrations in these three skeletal muscle types.
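
    The reported EMCL-IMCL separations are consistent with the commonly used bulk-susceptibility orientation dependence, in which the shift scales with (3 cos²θ − 1)/2 and reaches roughly 0.2 ppm for fibers parallel to B0; the 0.2 ppm scale factor in the sketch below is an assumed literature value, not taken from this paper:

      import numpy as np

      def emcl_imcl_shift(pennation_deg, max_shift_ppm=0.20):
          """EMCL-IMCL separation vs. fiber angle to B0 using the orientation term
          (3*cos^2(theta) - 1)/2; the 0.20 ppm scale factor is an assumed value."""
          theta = np.radians(pennation_deg)
          return max_shift_ppm * (3.0 * np.cos(theta) ** 2 - 1.0) / 2.0

      for muscle, angle in [("VL", 23.78), ("SO", 17.06), ("TA", 8.49)]:
          print(f"{muscle}: {emcl_imcl_shift(angle):.2f} ppm")
      # Prints roughly 0.15, 0.17 and 0.19 ppm, matching the separations quoted above.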

  16. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.
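
    A minimal sketch of the core tracking loop (threshold each frame, keep the largest connected component, record its centroid, and difference centroids for velocity) is given below; it is far simpler than Spotlight itself and runs on a toy synthetic sequence:

      import numpy as np
      from scipy import ndimage

      def track_centroids(frames, threshold=128):
          """Threshold each frame, keep the largest connected component and record
          its centroid; frame-to-frame centroid differences give a velocity estimate."""
          centroids = []
          for frame in frames:
              mask = frame > threshold
              labels, n = ndimage.label(mask)
              if n == 0:
                  centroids.append((np.nan, np.nan))
                  continue
              sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
              largest = int(np.argmax(sizes)) + 1
              centroids.append(ndimage.center_of_mass(mask, labels, largest))
          return np.array(centroids)

      # Toy sequence: a bright 5x5 "droplet" drifting 2 pixels right per frame.
      frames = []
      for t in range(5):
          f = np.zeros((64, 64), dtype=np.uint8)
          f[30:35, 10 + 2 * t:15 + 2 * t] = 255
          frames.append(f)
      print(np.diff(track_centroids(frames), axis=0))   # per-frame (row, col) velocity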

  17. Perceptual approaches to finding features in data

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.

    2013-03-01

    Electronic imaging applications hinge on the ability to discover features in data. For example, doctors examine diagnostic images for tumors, broken bones and changes in metabolic activity. Financial analysts explore visualizations of market data to find correlations, outliers and interaction effects. Seismologists look for signatures in geological data to tell them where to drill or where an earthquake may begin. These data are very diverse, including images, numbers, graphs, 3-D graphics, and text, and are growing exponentially, largely through the rise in automatic data collection technologies such as sensors and digital imaging. This paper explores important trends in the art and science of finding features in data, such as the tension between bottom-up and top-down processing, the semantics of features, and the integration of human- and algorithm-based approaches. This story is told from the perspective of the IS&T/SPIE Conference on Human Vision and Electronic Imaging (HVEI), which has fostered research at the intersection between human perception and the evolution of new technologies.

  18. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks.

    PubMed

    Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L

    2016-07-01

    Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text
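
    A hedged sketch of the evaluation protocol (an SVM on pre-extracted CNN features, scored by ROC AUC under five-fold cross-validation) is shown below; the feature matrix and labels are random stand-ins, since extracting real transfer-learning features is outside the scope of this snippet:

      import numpy as np
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      # Stand-in for CNN-transfer features: one pooled feature vector per lesion.
      # Shapes, values and labels are random placeholders, not real mammography data.
      rng = np.random.default_rng(0)
      n_lesions, n_features = 219, 512
      X = rng.normal(size=(n_lesions, n_features))
      y = rng.integers(0, 2, size=n_lesions)     # 0 = benign, 1 = malignant (toy labels)

      # Linear SVM on the transferred features, scored by ROC AUC with
      # five-fold cross-validation by lesion, mirroring the protocol above.
      clf = SVC(kernel="linear")
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
      print(f"mean AUC over folds: {auc.mean():.3f}")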

  19. Segmentation-driven compound document coding based on H.264/AVC-INTRA.

    PubMed

    Zaghetto, Alexandre; de Queiroz, Ricardo L

    2007-07-01

    In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., composed of text, graphics, and pictures. Even though mixed contents (compound) documents usually require the use of multiple compressors, we apply a single compressor for both text and pictures. For that, distortion is taken into account differently between text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock by macroblock basis, i.e., we deviate bits from pictorial regions to text in order to keep text edges sharp. We show results of a segmentation driven quantizer adaptation method applied to compress documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, at negligible visual losses on pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
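
    The quantizer-adaptation idea can be sketched as a per-macroblock QP map driven by a text mask: macroblocks touching text get a lower QP (more bits, sharper edges), pictorial macroblocks a higher one. The QP values and macroblock size below are assumed for illustration, not the paper's settings:

      import numpy as np

      def qp_map(text_mask, base_qp=34, text_qp=26, mb=16):
          """Assign a lower quantization parameter (more bits, sharper edges) to
          macroblocks containing text pixels and a higher QP to pictorial blocks.
          QP values and the 16x16 macroblock size are assumed for illustration."""
          h, w = text_mask.shape
          rows, cols = int(np.ceil(h / mb)), int(np.ceil(w / mb))
          qps = np.full((rows, cols), base_qp, dtype=int)
          for r in range(rows):
              for c in range(cols):
                  block = text_mask[r * mb:(r + 1) * mb, c * mb:(c + 1) * mb]
                  if block.any():                    # macroblock touches text
                      qps[r, c] = text_qp
          return qps

      # Toy page mask: a band of text pixels across an otherwise pictorial page.
      mask = np.zeros((64, 64), dtype=bool)
      mask[10:20, 4:60] = True
      print(qp_map(mask))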

  20. Stimulus Modality and Smoking Behavior: Moderating Role of Implicit Attitudes.

    PubMed

    Ezeh, Valentine C; Mefoh, Philip

    2015-07-20

    This study investigated whether stimulus modality influences smoking behavior among smokers in South Eastern Nigeria and also whether implicit attitudes moderate the relationship between stimulus modality and smoking behavior. Sixty undergraduate students of the University of Nigeria, Nsukka, participated. Participants were individually administered the IAT task as a measure of implicit attitude toward smoking and randomly assigned to either an image condition that paired images of cigarettes with aversive images of potential health consequences or a text condition that paired images of cigarettes with aversive texts about potential health consequences. A one-predictor, one-moderator binary logistic analysis indicated that stimulus modality significantly predicts smoking behavior (p < .05), with those in the image condition choosing not to smoke with greater probability than those in the text condition. The interaction between stimulus modality and IAT scores was also significant (p < .05). Specifically, the modality effect was larger for participants in the image group who held more negative implicit attitudes toward smoking. The finding shows the urgent need to introduce the use of aversive images of potential health consequences on cigarette packs in Nigeria.

  1. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    PubMed

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  2. A novel Iterative algorithm to text segmentation for web born-digital images

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen

    2015-07-01

    Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper, a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSERs) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap-checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions at ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme significantly reduces both the number of over-merged regions and the loss rate of target atoms; the overall performance outperforms the best of the methods reported in the two competitions in terms of recall rate and F-score, at the cost of slightly higher computational complexity.
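
    A simplified stand-in for the candidate-generation step, sweeping a sequence of diminishing thresholds and keeping character-sized connected components, is sketched below; the actual method uses MSER detection plus similarity-graph grouping and overlap checking, which are not reproduced here, and the size limits are assumed:

      import numpy as np
      from skimage.measure import label, regionprops

      def candidate_text_regions(gray, thresholds=(200, 180, 160, 140, 120)):
          """Binarize the page at a sequence of diminishing thresholds and keep
          connected components whose size looks character-like; the grouping,
          similarity-graph and overlap-checking stages are not reproduced here."""
          candidates = []
          for t in thresholds:                       # diminishing thresholds
              components = label(gray < t, connectivity=2)
              for region in regionprops(components):
                  h = region.bbox[2] - region.bbox[0]
                  w = region.bbox[3] - region.bbox[1]
                  if 3 <= h <= 60 and 1 <= w <= 60:  # assumed character-size limits
                      candidates.append((t, region.bbox))
          return candidates

      # Toy born-digital image: light background with two dark text "atoms".
      page = np.full((60, 120), 230, dtype=np.uint8)
      page[20:32, 10:18] = 40
      page[20:32, 22:30] = 40
      print(len(candidate_text_regions(page)), "candidate regions across thresholds")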

  3. Supervised learning technique for the automated identification of white matter hyperintensities in traumatic brain injury.

    PubMed

    Stone, James R; Wilde, Elisabeth A; Taylor, Brian A; Tate, David F; Levin, Harvey; Bigler, Erin D; Scheibel, Randall S; Newsome, Mary R; Mayer, Andrew R; Abildskov, Tracy; Black, Garrett M; Lennon, Michael J; York, Gerald E; Agarwal, Rajan; DeVillasante, Jorge; Ritter, John L; Walker, Peter B; Ahlers, Stephen T; Tustison, Nicholas J

    2016-01-01

    White matter hyperintensities (WMHs) are foci of abnormal signal intensity in white matter regions seen with magnetic resonance imaging (MRI). WMHs are associated with normal ageing and have shown prognostic value in neurological conditions such as traumatic brain injury (TBI). The impracticality of manually quantifying these lesions limits their clinical utility and motivates the utilization of machine learning techniques for automated segmentation workflows. This study develops a concatenated random forest framework with image features for segmenting WMHs in a TBI cohort. The framework is built upon the Advanced Normalization Tools (ANTs) and ANTsR toolkits. MR (3D FLAIR, T2- and T1-weighted) images from 24 service members and veterans scanned in the Chronic Effects of Neurotrauma Consortium's (CENC) observational study were acquired. Manual annotations were employed for both training and evaluation using a leave-one-out strategy. Performance measures include sensitivity, positive predictive value, [Formula: see text] score and relative volume difference. Final average results were: sensitivity = 0.68 ± 0.38, positive predictive value = 0.51 ± 0.40, [Formula: see text] = 0.52 ± 0.36, relative volume difference = 43 ± 26%. In addition, three lesion size ranges are selected to illustrate the variation in performance with lesion size. Paired with correlative outcome data, supervised learning methods may allow for identification of imaging features predictive of diagnosis and prognosis in individual TBI patients.
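
    The evaluation metrics named above can be computed directly from binary predicted and manual masks; a small sketch follows, using toy 3-D masks in place of real WMH segmentations:

      import numpy as np

      def segmentation_metrics(pred, truth):
          """Voxel-wise sensitivity, positive predictive value, F1 (Dice) and relative
          volume difference from binary predicted and manual lesion masks."""
          pred, truth = pred.astype(bool), truth.astype(bool)
          tp = np.logical_and(pred, truth).sum()
          fp = np.logical_and(pred, ~truth).sum()
          fn = np.logical_and(~pred, truth).sum()
          sensitivity = tp / (tp + fn) if tp + fn else 0.0
          ppv = tp / (tp + fp) if tp + fp else 0.0
          f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
          rvd = abs(int(pred.sum()) - int(truth.sum())) / truth.sum() if truth.sum() else np.nan
          return sensitivity, ppv, f1, rvd

      # Toy 3-D masks standing in for predicted vs. manually annotated WMH maps.
      rng = np.random.default_rng(1)
      truth = rng.random((32, 32, 16)) > 0.97
      pred = truth.copy()
      pred[rng.random(pred.shape) > 0.995] ^= True    # inject a few segmentation errors
      print(segmentation_metrics(pred, truth))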

  4. A low-cost tracked C-arm (TC-arm) upgrade system for versatile quantitative intraoperative imaging.

    PubMed

    Amiri, Shahram; Wilson, David R; Masri, Bassam A; Anglin, Carolyn

    2014-07-01

    C-arm fluoroscopy is frequently used in clinical applications as a low-cost and mobile real-time qualitative assessment tool. C-arms, however, are not widely accepted for applications involving quantitative assessments, mainly due to the lack of reliable and low-cost position tracking methods, as well as adequate calibration and registration techniques. The solution suggested in this work is a tracked C-arm (TC-arm) which employs a low-cost sensor tracking module that can be retrofitted to any conventional C-arm for tracking the individual joints of the device. Registration and offline calibration methods were developed that allow accurate tracking of the gantry and determination of the exact intrinsic and extrinsic parameters of the imaging system for any acquired fluoroscopic image. The performance of the system was evaluated in comparison to an Optotrak[Formula: see text] motion tracking system and by a series of experiments on accurately built ball-bearing phantoms. Accuracies of the system were determined for 2D-3D registration, three-dimensional landmark localization, and for generating panoramic stitched views in simulated intraoperative applications. The system was able to track the center point of the gantry with an accuracy of [Formula: see text] mm or better. Accuracies of 2D-3D registrations were [Formula: see text] mm and [Formula: see text]. Three-dimensional landmark localization had an accuracy of [Formula: see text] of the length (or [Formula: see text] mm) on average, depending on whether the landmarks were located along, above, or across the table. The overall accuracies of the two-dimensional measurements conducted on stitched panoramic images of the femur and lumbar spine were 2.5 [Formula: see text] 2.0 % [Formula: see text] and [Formula: see text], respectively. The TC-arm system has the potential to achieve sophisticated quantitative fluoroscopy assessment capabilities using an existing C-arm imaging system. This technology may be useful to improve the quality of orthopedic surgery and interventional radiology.

  5. Two-dimensional multi-frequency imaging of a tumor inclusion in a homogeneous breast phantom using the harmonic motion Doppler imaging method.

    PubMed

    Tafreshi, Azadeh Kamali; Top, Can Barış; Gençer, Nevzat Güneri

    2017-06-21

    Harmonic motion microwave Doppler imaging (HMMDI) is a novel imaging modality for imaging the coupled electrical and mechanical properties of body tissues. In this paper, we used two experimental systems with different receiver configurations to obtain HMMDI images from tissue-mimicking phantoms at multiple vibration frequencies between 15 Hz and 35 Hz. In the first system, we used a spectrum analyzer to obtain the Doppler data in the frequency domain, while in the second one, we used a homodyne receiver that was designed to acquire time-domain data. The developed phantoms mimicked the elastic and dielectric properties of breast fat tissue, and included a [Formula: see text] mm cylindrical inclusion representing the tumor. A focused ultrasound probe was mechanically scanned in two lateral dimensions to obtain two-dimensional HMMDI images of the phantoms. The inclusions were resolved inside the fat phantom using both experimental setups. The image resolution increased with increasing vibration frequency. The designed receiver showed higher sensitivity than the spectrum analyzer measurements. The results also showed that time-domain data acquisition should be used to fully exploit the potential of the HMMDI method.

  6. Two-dimensional multi-frequency imaging of a tumor inclusion in a homogeneous breast phantom using the harmonic motion Doppler imaging method

    NASA Astrophysics Data System (ADS)

    Kamali Tafreshi, Azadeh; Barış Top, Can; Güneri Gençer, Nevzat

    2017-06-01

    Harmonic motion microwave Doppler imaging (HMMDI) is a novel imaging modality for imaging the coupled electrical and mechanical properties of body tissues. In this paper, we used two experimental systems with different receiver configurations to obtain HMMDI images from tissue-mimicking phantoms at multiple vibration frequencies between 15 Hz and 35 Hz. In the first system, we used a spectrum analyzer to obtain the Doppler data in the frequency domain, while in the second one, we used a homodyne receiver that was designed to acquire time-domain data. The developed phantoms mimicked the elastic and dielectric properties of breast fat tissue, and included a 14 mm × 9 mm cylindrical inclusion representing the tumor. A focused ultrasound probe was mechanically scanned in two lateral dimensions to obtain two-dimensional HMMDI images of the phantoms. The inclusions were resolved inside the fat phantom using both experimental setups. The image resolution increased with increasing vibration frequency. The designed receiver showed higher sensitivity than the spectrum analyzer measurements. The results also showed that time-domain data acquisition should be used to fully exploit the potential of the HMMDI method.

  7. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  8. Measurement of smaller colon polyp in CT colonography images using morphological image processing.

    PubMed

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K

    2017-11-01

    Automated measurement of the size and shape of colon polyps is one of the challenges in Computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller polyp measurement in CTC using image processing techniques. A domain knowledge-based method has been implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating smaller polyps based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to the 6-9 mm range, polyps of even <5 mm were also detected. The results were validated qualitatively and quantitatively using both 2D MPR and 3D view. Implementation was done on a high-performance computer with parallel processing. It takes [Formula: see text] min for measuring the smaller polyp in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively the results were acceptable when compared to the ground truth at [Formula: see text].
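
    As a rough illustration of the kind of morphological processing described above (not the authors' actual pipeline), the sketch below uses binary opening to separate small protrusions from a binary colon-wall mask and measures their extent in millimetres; the mask, structuring element, iteration count, and pixel spacing are hypothetical.

      # Minimal sketch: isolate small protrusions (polyp candidates) on a binary
      # mask with morphological opening, then measure their extent.
      # The demo mask and pixel spacing are hypothetical placeholders.
      import numpy as np
      from scipy import ndimage

      def measure_candidates(mask, spacing_mm=(0.7, 0.7)):
          """Return the longest axis (mm) of each blob that opening removes."""
          # Opening removes structures thinner than the (repeated) structuring
          # element; the difference highlights small protrusions on the wall.
          struct = ndimage.generate_binary_structure(2, 1)
          opened = ndimage.binary_opening(mask, structure=struct, iterations=3)
          candidates = mask & ~opened

          labels, _ = ndimage.label(candidates)
          sizes = []
          for region in ndimage.find_objects(labels):
              dy = (region[0].stop - region[0].start) * spacing_mm[0]
              dx = (region[1].stop - region[1].start) * spacing_mm[1]
              sizes.append(max(dy, dx))          # longest axis in millimetres
          return sizes

      if __name__ == "__main__":
          demo = np.zeros((64, 64), dtype=bool)
          demo[10:54, 10:22] = True              # a thick "wall" strip (kept by opening)
          demo[28:33, 22:27] = True              # a small bump on the wall (removed)
          # The bump is reported; thin corner fragments of the wall may also appear.
          print(measure_candidates(demo))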

  9. The commercialization of robotic surgery: unsubstantiated marketing of gynecologic surgery by hospitals.

    PubMed

    Schiavone, Maria B; Kuo, Eugenia C; Naumann, R Wendel; Burke, William M; Lewin, Sharyn N; Neugut, Alfred I; Hershman, Dawn L; Herzog, Thomas J; Wright, Jason D

    2012-09-01

    We analyzed the content, quality, and accuracy of information provided on hospital web sites about robotic gynecologic surgery. An analysis of hospitals with more than 200 beds from a selection of states was performed. Hospital web sites were analyzed for the content and quality of data regarding robotic-assisted surgery. Among 432 hospitals, the web sites of 192 (44.4%) contained marketing for robotic gynecologic surgery. Stock images (64.1%) and text (24.0%) derived from the robot manufacturer were frequent. Although most sites reported improved perioperative outcomes, limitations of robotics including cost, complications, and operative time were discussed only 3.7%, 1.6%, and 3.7% of the time, respectively. Only 47.9% of the web sites described a comparison group. Marketing of robotic gynecologic surgery is widespread. Much of the content is not based on high-quality data, fails to present alternative procedures, and relies on stock text and images. Copyright © 2012 Mosby, Inc. All rights reserved.

  10. Strong is the new skinny: A content analysis of fitspiration websites.

    PubMed

    Boepple, Leah; Ata, Rheanna N; Rum, Ruba; Thompson, J Kevin

    2016-06-01

    "Fitspiration" websites are media that aim to inspire people to live healthy and fit lifestyles through motivating images and text related to exercise and diet. Given the link between similar Internet content (i.e., healthy living blogs) and problematic messages, we hypothesized that content on these sites would over-emphasize appearance and promote problematic messages regarding exercise and diet. Keywords "fitspo" and "fitspiration" were entered into search engines. The first 10 images and text from 51 individual websites were rated on a variety of characteristics. Results indicated that a majority of messages found on fitspiration websites focused on appearance. Other common themes included content promoting exercise for appearance-motivated reasons and content promoting dietary restraint. "Fitspiration" websites are a source of messages that reinforce over-valuation of physical appearance, eating concerns, and excessive exercise. Further research is needed to examine the impact viewing such content has on participants' psychological health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Design and realization of the compound text-based test questions library management system

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Feng, Lin; Zhao, Xin

    2011-12-01

    The test questions library management system is an essential part of an on-line examination system. Its basic requirement is to handle compound text containing information such as images and formulae, and to create the corresponding Word documents. After comparing the two current approaches to document creation, this paper presents a design proposal for a Word Automation mechanism based on OLE/COM technology, discusses the application of Word Automation in detail, and finally provides the operating results of the system, which are of high reference value for improving the efficiency of generating project documents and report forms.
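
    The Word Automation mechanism mentioned above drives Microsoft Word through its OLE/COM interface. A minimal sketch of the idea is shown below using Python and the pywin32 package (an assumption for this example; the original system may use any COM-capable language). The document path, inserted text, and image path are made up, and the script requires Windows with Word installed.

      # Minimal illustration of Word Automation over OLE/COM: create a document,
      # insert question text and an image, and save it. Paths and content are
      # made up; pywin32 (win32com) is an assumption for this sketch.
      import win32com.client

      word = win32com.client.Dispatch("Word.Application")
      word.Visible = False
      doc = word.Documents.Add()

      cursor = doc.Range()
      cursor.Text = "Question 1: identify the structure shown below.\n"
      cursor.Collapse(0)                                  # 0 = wdCollapseEnd
      doc.InlineShapes.AddPicture(r"C:\questions\figure1.png", False, True, cursor)

      doc.SaveAs(r"C:\questions\exam.docx")
      doc.Close()
      word.Quit()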

  12. Development and Analysis of New 3D Tactile Materials for the Enhancement of STEM Education for the Blind and Visually Impaired

    NASA Astrophysics Data System (ADS)

    Gonzales, Ashleigh

    Blind and visually impaired individuals have historically demonstrated a low participation in the fields of science, technology, engineering, and mathematics (STEM). This low participation is reflected in both their education and career choices. Despite the establishment of the Americans with Disabilities Act (ADA) and the Individuals with Disabilities Education Act (IDEA), blind and visually impaired (BVI) students continue to academically fall below the level of their sighted peers in the areas of science and math. Although this deficit is created by many factors, this study focuses on the lack of adequate accessible image-based materials. Traditional methods for creating accessible image materials for the vision impaired have included detailed verbal descriptions accompanying an image or conversion into a simplified tactile graphic. It is very common that no substitute materials will be provided to students within STEM courses because they are image-rich disciplines and often include a large number of images, diagrams, and charts. Additionally, images that are translated into text or simplified into basic line drawings are frequently inadequate because they rely on the interpretations of resource personnel who do not have expertise in STEM. Within this study, a method to create a new type of tactile 3D image was developed using High Density Polyethylene (HDPE) and Computer Numeric Control (CNC) milling. These tactile image boards preserve high levels of detail when compared to the original print image. To determine the discernibility and effectiveness of tactile images, these customizable boards were tested in various university classrooms as well as in participation studies which included BVI and sighted students. Results from these studies indicate that tactile images are discernible and were found to improve performance in lab exercises by as much as 60% for those with visual impairment. Incorporating tactile HDPE 3D images into a classroom setting was shown to increase the interest, participation, and performance of BVI students, suggesting that this type of 3D tactile image should be incorporated into STEM classes to increase the participation of these students and improve the level of training they receive in science and math.

  13. Planetary image conversion task

    NASA Technical Reports Server (NTRS)

    Martin, M. D.; Stanley, C. L.; Laughlin, G.

    1985-01-01

    The Planetary Image Conversion Task group processed 12,500 magnetic tapes containing raw imaging data from JPL planetary missions and produced an image data base in consistent format on 1200 fully packed 6250-bpi tapes. The output tapes will remain at JPL. A copy of the entire tape set was delivered to US Geological Survey, Flagstaff, Ariz. A secondary task converted computer datalogs, which had been stored in project specific MARK IV File Management System data types and structures, to flat-file, text format that is processable on any modern computer system. The conversion processing took place at JPL's Image Processing Laboratory on an IBM 370-158 with existing software modified slightly to meet the needs of the conversion task. More than 99% of the original digital image data was successfully recovered by the conversion task. However, processing data tapes recorded before 1975 was destructive. This discovery is of critical importance to facilities responsible for maintaining digital archives since normal periodic random sampling techniques would be unlikely to detect this phenomenon, and entire data sets could be wiped out in the act of generating seemingly positive sampling results. Recommended follow-on activities are also included.

  14. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a data base of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for the image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.

  15. Intelligent content fitting for digital publishing

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan

    2006-02-01

    One recurring problem in Variable Data Printing (VDP) is that the existing contents cannot satisfy the VDP task as-is. So there is a strong need for content fitting technologies to support high-value digital publishing applications, in which text and image are the two major types of contents. This paper presents meta-Autocrop framework for image fitting and TextFlex technology for text fitting. The meta-Autocrop framework supports multiple modes: fixed aspect-ratio mode, advice mode, and verification mode. The TextFlex technology supports non-rectangular text wrapping and paragraph-based line breaking. We also demonstrate how these content fitting technologies are utilized in the overall automated composition and layout system.

  16. The World Wide Web--a new tool for biomedical engineering education.

    PubMed

    Blanchard, S M

    1997-01-01

    An ever-increasing variety of materials (text, images, videos, and sound) are available through the World Wide Web (WWW). While textbooks, which are often outdated by the time they are published, are usually limited to black and white text and images, many supplemental materials can be found on the WWW. The WWW also provides many resources for student projects. In BAE 465: Biomedical Engineering Applications, student teams developed WWW-based term projects on biomedical topics, e.g. biomaterials, MRI, and medical ultrasound. After the projects were completed and edited by the instructor, they were placed on-line for world-wide access if permission for this had been granted by the student authors. Projects from three classes have been used to form the basis for an electronic textbook which is available at http://www.eos.ncsu.edu/bae/research/blanchard/www/465/textbook/. This electronic textbook also includes instructional objectives and sample tests for specific topic areas. Student projects have been linked to the appropriate topic areas within the electronic textbook. Links to relevant sites have been included within the electronic textbook as well as within the individual projects. Students were required to link to images and other materials they wanted to include in their project in order to avoid copyright issues. The drawback to this approach to copyright protection is that addresses can change making links unavailable. In BAE 465 and in BAE 235: Engineering Biology, the WWW has also been used to distribute instructional objectives, the syllabi and class policies, homework problems, and abbreviated lecture notes. This has made maintaining course-related material easier and has reduced the amount of paper used by both the students and the instructor. Goals for the electronic textbook include the addition of instructional simulation programs that can be run from remote sites. In the future, biomedical engineering may be taught in a virtual classroom with participation by an instructor and students from many different parts of the world.

  17. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Keywords: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.

  18. Effects of Modality and Redundancy Principles on the Learning and Attitude of a Computer-Based Music Theory Lesson among Jordanian Primary Pupils

    ERIC Educational Resources Information Center

    Aldalalah, Osamah Ahmad; Fong, Soon Fook

    2010-01-01

    The purpose of this study was to investigate the effects of modality and redundancy principles on the attitude and learning of music theory among primary pupils of different aptitudes in Jordan. The lesson of music theory was developed in three different modes, audio and image (AI), text with image (TI) and audio with image and text (AIT). The…

  19. A unified framework of image latent feature learning on Sina microblog

    NASA Astrophysics Data System (ADS)

    Wei, Jinjin; Jin, Zhigang; Zhou, Yuan; Zhang, Rui

    2015-10-01

    Large-scale user-contributed images with texts are rapidly increasing on the social media websites, such as Sina microblog. However, the noise and incomplete correspondence between the images and the texts give rise to the difficulty in precise image retrieval and ranking. In this paper, a hypergraph-based learning framework is proposed for image ranking, which simultaneously utilizes visual feature, textual content and social link information to estimate the relevance between images. Representing each image as a vertex in the hypergraph, complex relationship between images can be reflected exactly. Then updating the weight of hyperedges throughout the hypergraph learning process, the effect of different edges can be adaptively modulated in the constructed hypergraph. Furthermore, the popularity degree of the image is employed to re-rank the retrieval results. Comparative experiments on a large-scale Sina microblog data-set demonstrate the effectiveness of the proposed approach.

  20. Cancer Biomarkers | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"175","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Cancer Biomarkers Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Cancer Biomarkers Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Cancer Biomarkers Research Group Homepage Logo","title":"Cancer

  1. Gastrointestinal and Other Cancers | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"181","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Gastrointestinal and Other Cancers Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Gastrointestinal and Other Cancers Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Gastrointestinal and Other

  2. Biometry | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"66","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Biometry Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Biometry Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Biometry Research Group Homepage Logo","title":"Biometry Research Group Homepage

  3. Considering Visual Text Complexity: A Guide for Teachers

    ERIC Educational Resources Information Center

    Cappello, Marva

    2017-01-01

    Twenty-first century literacy requires students to analyze and create images for communication across and within academic disciplines. Thus, literacy teachers are now responsible for supporting students as they engage with visual texts. We must carefully and intentionally choose images for teaching practice and consider the reader, instructional…

  4. An information gathering system for medical image inspection

    NASA Astrophysics Data System (ADS)

    Lee, Young-Jin; Bajcsy, Peter

    2005-04-01

    We present an information gathering system for medical image inspection that consists of software tools for capturing computer-centric and human-centric information. Computer-centric information includes (1) static annotations, such as (a) image drawings enclosing any selected area, a set of areas with similar colors, a set of salient points, and (b) textual descriptions associated with either image drawings or links between pairs of image drawings, and (2) dynamic (or temporal) information, such as mouse movements, zoom level changes, image panning and frame selections from an image stack. Human-centric information is represented by video and audio signals that are acquired by computer-mounted cameras and microphones. The short-term goal of the presented system is to facilitate learning of medical novices from medical experts, while the long-term goal is to data mine all information about image inspection for assisting in making diagnoses. In this work, we built basic software functionality for gathering computer-centric and human-centric information of the aforementioned variables. Next, we developed the information playback capabilities of all gathered information for educational purposes. Finally, we prototyped text-based and image template-based search engines to retrieve information from recorded annotations, for example, (a) find all annotations containing the word "blood vessels", or (b) search for similar areas to a selected image area. The information gathering system for medical image inspection reported here has been tested with images from the Histology Atlas database.

  5. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    PubMed

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.

  6. Atomic and electronic structures of Si(1 1 1)-(√3 x √3)R30°-Au and (6 × 6)-Au surfaces.

    PubMed

    Patterson, C H

    2015-12-02

    Si(1 1 1)-Au surfaces with around one monolayer of Au exhibit many ordered structures and structures containing disordered domain walls. Hybrid density functional theory (DFT) calculations presented here reveal the origin of these complex structures and tendency to form domain walls. The conjugate honeycomb chain trimer (CHCT) structure of the [Formula: see text]-Au phase contains Si atoms with non-bonding surface states which can bind Au atoms in pairs in interstices of the CHCT structure and make this surface metallic. Si adatoms adsorbed on the [Formula: see text]-Au surface induce a gapped surface through interaction with the non-bonding states. Adsorption of extra Au atoms in interstitial sites of the [Formula: see text]-Au surface is stabilized by interaction with the non-bonding orbitals and leads to higher coverage ordered structures including the [Formula: see text]-Au phase. Extra Au atoms bound in interstitial sites of the [Formula: see text]-Au surface result in top layer Si atoms with an SiAu4 butterfly wing configuration. The structure of a [Formula: see text]-Au phase, whose in-plane top atomic layer positions were previously determined by an electron holography technique (Grozea et al 1998 Surf. Sci. 418 32), is calculated using total energy minimization. The Patterson function for this structure is calculated and is in good agreement with data from an in-plane x-ray diffraction study (Dornisch et al 1991 Phys. Rev. B 44 11221). Filled and empty state scanning tunneling microscopy (STM) images are calculated for domain walls and the [Formula: see text]-Au structure. The [Formula: see text]-Au phase is 2D chiral and this is evident in computed and actual STM images. [Formula: see text]-Au and domain wall structures contain the SiAu4 motif with a butterfly wing shape. Chemical bonding within the Si-Au top layers of the [Formula: see text]-Au and [Formula: see text]-Au surfaces is analyzed and an explanation for the SiAu4 motif structure is given.

  7. The USGS Side-Looking Airborne Radar (SLAR) program: CD-ROMs expand potential for petroleum exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kover, A.N.; Schoonmaker, J.W. Jr.; Pohn, H.A.

    1991-03-01

    The United States Geological Survey (USGS) began the systematic collection of Side-Looking Airborne Radar (SLAR) data in 1980. The SLAR image data, useful for many geologic applications including petroleum exploration, are compiled into mosaics using the USGS 1:250,000-scale topographic map series for format and control. Mosaics have been prepared for over 35% of the United States. Image data collected since 1985 are also available as computer compatible tapes (CCTs) for digital analysis. However, the use of tapes is often cumbersome. To make digital data more readily available for use on a microcomputer, the USGS has started to prepare compact discs-read-only memory (CD-ROM). Several experimental discs have been compiled to demonstrate the utility of the medium to make available very large data sets. These discs include necessary nonproprietary software, text, radar, and other image data. The SLAR images selected for these discs show significantly different geologic features and include the Long Valley caldera, a section of the San Andreas fault in the Monterey area, the Grand Canyon, and glaciers in southeastern Alaska. At present, several CD-ROMs are available as standard products distributed by the USGS EROS Data Center in Sioux Falls, South Dakota 57198. This is also the source for all USGS SLAR photographic and digital material.

  8. Learning of Multimodal Representations With Random Walks on the Click Graph.

    PubMed

    Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting

    2016-02-01

    In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from the users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationship between the vertices in the click graph. By minimizing both the truncated random walk loss as well as the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model which is named multimodal random walk neural network (MRW-NN) can be applied to not only learn robust representation of the existing multimodal data in the click graph, but also deal with the unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set Clickture and further show that MRW-NN achieves much better cross-modal retrieval performance on the unseen queries/images than the other state-of-the-art methods.
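
    One way to realize the truncated random walks mentioned above is to walk a bipartite query-image click graph with step probabilities proportional to click counts, producing vertex sequences for a representation learner. The sketch below illustrates only that walk-generation step on a tiny made-up graph; it is not the MRW-NN model itself, and the walk parameters are arbitrary.

      # Illustrative sketch: truncated random walks over a bipartite click graph
      # (queries <-> images). The tiny graph below is a made-up example.
      import random

      def random_walks(adj, walk_len=5, walks_per_node=10, seed=0):
          """adj: dict vertex -> list of (neighbor, click_count)."""
          rng = random.Random(seed)
          walks = []
          for start in adj:
              for _ in range(walks_per_node):
                  walk, node = [start], start
                  for _ in range(walk_len - 1):
                      nbrs = adj.get(node)
                      if not nbrs:
                          break
                      # step probability proportional to click counts on the edges
                      total = sum(c for _, c in nbrs)
                      r, acc = rng.uniform(0, total), 0.0
                      for nxt, c in nbrs:
                          acc += c
                          if r <= acc:
                              node = nxt
                              break
                      walk.append(node)
                  walks.append(walk)
          return walks

      click_graph = {
          "q:red car":    [("img:001", 12), ("img:002", 3)],
          "q:sports car": [("img:001", 7)],
          "img:001":      [("q:red car", 12), ("q:sports car", 7)],
          "img:002":      [("q:red car", 3)],
      }
      print(random_walks(click_graph, walk_len=4, walks_per_node=2))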

  9. Representations of Codeine Misuse on Instagram: Content Analysis.

    PubMed

    Cherian, Roy; Westbrook, Marisa; Ramo, Danielle; Sarkar, Urmimala

    2018-03-20

    Prescription opioid misuse has doubled over the past 10 years and is now a public health epidemic. Analysis of social media data may provide additional insights into opioid misuse to supplement the traditional approaches of data collection (eg, self-report on surveys). The aim of this study was to characterize representations of codeine misuse through analysis of public posts on Instagram to understand text phrases related to misuse. We identified hashtags and searchable text phrases associated with codeine misuse by analyzing 1156 sequential Instagram posts over the course of 2 weeks from May 2016 to July 2016. Content analysis of posts associated with these hashtags identified the most common themes arising in images, as well as culture around misuse, including how misuse is happening and being perpetuated through social media. A majority of images (50/100; 50.0%) depicted codeine in its commonly misused form, combined with soda (lean). Codeine misuse was commonly represented with the ingestion of alcohol, cannabis, and benzodiazepines. Some images highlighted the previously noted affinity between codeine misuse and hip-hop culture or mainstream popular culture images. The prevalence of codeine misuse images, glamorizing of ingestion with soda and alcohol, and their integration with mainstream, popular culture imagery holds the potential to normalize and increase codeine misuse and overdose. To reduce harm and prevent misuse, immediate public health efforts are needed to better understand the relationship between the potential normalization, ritualization, and commercialization of codeine misuse. ©Roy Cherian, Marisa Westbrook, Danielle Ramo, Urmimala Sarkar. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 20.03.2018.

  10. Mystery #11 Answer

    Atmospheric Science Data Center

    2013-04-22

    article title: MISR Mystery Image Quiz #11: Queensland, Australia. These Multi-angle Imaging SpectroRadiometer (MISR) images of ... MISR Team. Text acknowledgment: Clare Averill, David J. Diner, Graham Bothwell (Jet Propulsion Laboratory).

  11. Fast words boundaries localization in text fields for low quality document images

    NASA Astrophysics Data System (ADS)

    Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry

    2018-04-01

    The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation and recognition. While capturing an image with a mobile digital camera under uncontrolled capturing conditions, digital noise, perspective distortions or glares may occur. Further document processing gets complicated because of its specifics: layout elements, complex background, static text, document security elements, variety of text fonts. However, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities under specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited only to images of high quality. Methods for text in the wild have an excessively high computational complexity and are thus hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text on natural images. It uses local features, a sliding window and a lightweight neural network in order to achieve an optimal algorithm speed-precision ratio. The duration of the algorithm is 12 ms per field running on an ARM processor of a mobile device. The error rate for boundary localization on a test sample of 8000 fields is 0.3
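
    A much-simplified sketch of the sliding-window idea is shown below: a lightweight scorer (here a trivial ink-density measure standing in for the paper's small neural network) is evaluated at each window position along a text-field strip, and high-scoring runs are merged into word intervals. The window size, step, and threshold are invented for the example.

      import numpy as np

      def ink_score(window):                      # placeholder for a trained classifier
          return (window < 128).mean()            # fraction of dark pixels

      def word_intervals(field_gray, win=12, step=4, thresh=0.15):
          hits = []
          for x in range(0, field_gray.shape[1] - win + 1, step):
              if ink_score(field_gray[:, x:x + win]) > thresh:
                  hits.append((x, x + win))
          merged = []                             # merge overlapping windows into words
          for a, b in hits:
              if merged and a <= merged[-1][1]:
                  merged[-1][1] = b
              else:
                  merged.append([a, b])
          return [tuple(m) for m in merged]

      field = np.full((24, 120), 255, dtype=np.uint8)
      field[6:18, 10:40] = 0                      # first "word"
      field[6:18, 60:95] = 0                      # second "word"
      print(word_intervals(field))                # -> two intervals, one per word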

  12. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy, in multimedia objects, like software, image, video, audio and text. Therefore it is strategic to individualize and to develop methods and numerical algorithms, which are stable and have low computational cost, that will allow us to find a solution to these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, not blind, and wavelet-based. The use of Discrete Wavelet Transform is motivated by good time-frequency features and a good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover our algorithm can work with any image, thanks to the step of pre-processing of the image that includes resize techniques that adapt to the size of the original image for Wavelet transform. The watermark signal is calculated in correlation with the image features and statistic properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistic criterion. Experimentation on a large set of different images has been shown to be resistant against geometric, filtering, and StirMark attacks with a low rate of false alarm.
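
    The sketch below illustrates the general idea of non-blind, additive wavelet-domain watermarking (not the exact algorithm of the paper): a pseudo-random pattern is added to one DWT detail sub-band and later detected by correlating the sub-band difference between the suspect and original images against that pattern. It uses PyWavelets; the Haar wavelet, the strength alpha, and the test image are illustrative choices.

      import numpy as np
      import pywt

      def embed(image, key, alpha=2.0):
          LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
          wm = np.random.default_rng(key).standard_normal(LH.shape)
          marked = pywt.idwt2((LL, (LH + alpha * wm, HL, HH)), "haar")
          return marked, wm

      def detect(suspect, original, wm):
          # Non-blind detection: correlate the detail sub-band difference with
          # the watermark pattern (the original image is available).
          _, (LH_s, _, _) = pywt.dwt2(suspect.astype(float), "haar")
          _, (LH_o, _, _) = pywt.dwt2(original.astype(float), "haar")
          return float(np.corrcoef((LH_s - LH_o).ravel(), wm.ravel())[0, 1])

      if __name__ == "__main__":
          img = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(float)
          marked, wm = embed(img, key=42)
          wrong = np.random.default_rng(99).standard_normal(wm.shape)
          print(detect(marked, img, wm))          # close to 1.0 (correct pattern)
          print(detect(marked, img, wrong))       # near 0.0 (wrong pattern)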

  13. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely-used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to manually construct a large training set by manual delineation of the face regions.
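
    A sketch of the first stage only is given below, using OpenCV's pretrained Viola-Jones frontal-face cascade to propose face regions; the second-stage deep-learning verifier is represented by a placeholder function, since the trained model described in the paper is not available. The input file name is hypothetical.

      # First-stage sketch: OpenCV's pretrained Viola-Jones frontal-face cascade
      # proposes face regions; a second-stage verifier (placeholder below) would
      # then reject false positives. "figure.png" is a hypothetical input file.
      import cv2

      def verify_face(crop):
          # Placeholder for a trained true-positive/false-positive classifier.
          return True

      def detect_faces(path="figure.png"):
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
          boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          # Keep only the proposals the second-stage verifier accepts.
          return [(x, y, w, h) for (x, y, w, h) in boxes
                  if verify_face(gray[y:y + h, x:x + w])]

      if __name__ == "__main__":
          print(detect_faces())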

  14. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in the text image, binarization is a very difficult problem. This paper presents the improved binarization algorithm. The algorithm can be divided into several steps. First, the background approximation can be obtained by the polynomial fitting, and the text is sharpened by using bilateral filter. Second, the image contrast compensation is done to reduce the impact of light and improve contrast of the original image. Third, the first derivative of the pixels in the compensated image are calculated to get the average value of the threshold, then the edge detection is obtained. Fourth, the stroke width of the text is estimated through a measuring of distance between edge pixels. The final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the value of the final stroke width, the window size is calculated, then a local threshold estimation approach can begin to binaries the image. Finally, the small noise is removed based on the morphological operators. The experimental result shows that the proposed method can effectively remove the noise caused by complex background and varying light.
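
    The sketch below is a condensed approximation of such a pipeline rather than the full method: edge-preserving smoothing, a coarse background estimate (a median filter standing in for the polynomial fit), local thresholding of the compensated image, and removal of small noise components. Window sizes and thresholds are illustrative.

      import cv2
      import numpy as np

      def binarize(gray):
          """gray: 8-bit single-channel document image, dark text on light paper."""
          # Edge-preserving smoothing (diameter 9, color/space sigmas 50).
          sharp = cv2.bilateralFilter(gray, 9, 50, 50)
          # Coarse background estimate; stand-in for a polynomial background fit.
          background = cv2.medianBlur(sharp, 51)
          # Contrast compensation: dark text becomes bright on a flat background.
          compensated = cv2.subtract(background, sharp)
          # Local (windowed) mean thresholding of the compensated image.
          binary = cv2.adaptiveThreshold(compensated, 255,
                                         cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 35, -10)
          # Drop small connected components left by residual noise.
          n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
          clean = np.zeros_like(binary)
          for i in range(1, n):
              if stats[i, cv2.CC_STAT_AREA] >= 15:
                  clean[labels == i] = 255
          return clean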

  15. An Invisible Text Watermarking Algorithm using Image Watermark

    NASA Astrophysics Data System (ADS)

    Jalil, Zunera; Mirza, Anwar M.

    Copyright protection of digital contents is very necessary in today's digital world with efficient communication mediums such as the internet. Text is the dominant part of internet content, and there are very limited techniques available for text protection. This paper presents a novel algorithm for protection of plain text, which embeds the logo image of the copyright owner in the text; this logo can be extracted from the text later to prove ownership. The algorithm is robust against content-preserving modifications and at the same time is capable of detecting malicious tampering. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks by calculating normalized Hamming distances. The results are also compared with a recent work in this domain.
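
    The evaluation metric mentioned above, the normalized Hamming distance between the embedded and the extracted watermark bits (0 = identical, 1 = completely different), can be computed as in the short sketch below; the bit strings are made up.

      def normalized_hamming(bits_a, bits_b):
          assert len(bits_a) == len(bits_b)
          mismatches = sum(a != b for a, b in zip(bits_a, bits_b))
          return mismatches / len(bits_a)

      embedded  = [1, 0, 1, 1, 0, 0, 1, 0]
      extracted = [1, 0, 1, 0, 0, 0, 1, 1]
      print(normalized_hamming(embedded, extracted))   # 0.25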

  16. Statistical Properties of Ribbon Evolution and Reconnection Electric Fields in Eruptive and Confined Flares.

    PubMed

    Hinterreiter, J; Veronig, A M; Thalmann, J K; Tschernitz, J; Pötzi, W

    2018-01-01

    A statistical study of the chromospheric ribbon evolution in H[Formula: see text] two-ribbon flares was performed. The data set consists of 50 confined (62%) and eruptive (38%) flares that occurred from June 2000 to June 2015. The flares were selected homogeneously over the H[Formula: see text] and Geostationary Operational Environmental Satellite (GOES) classes, with an emphasis on including powerful confined flares and weak eruptive flares. H[Formula: see text] filtergrams from the Kanzelhöhe Observatory in combination with Michelson Doppler Imager (MDI) and Helioseismic and Magnetic Imager (HMI) magnetograms were used to derive the ribbon separation, the ribbon-separation velocity, the magnetic-field strength, and the reconnection electric field. We find that eruptive flares reveal statistically larger ribbon separation and higher ribbon-separation velocities than confined flares. In addition, the ribbon separation of eruptive flares correlates with the GOES SXR flux, whereas no clear dependence was found for confined flares. The maximum ribbon-separation velocity is not correlated with the GOES flux, but eruptive flares reveal on average a higher ribbon-separation velocity (by ≈10 km s⁻¹). The local reconnection electric field of confined ([Formula: see text]) and eruptive ([Formula: see text]) flares correlates with the GOES flux, indicating that more powerful flares involve stronger reconnection electric fields. In addition, eruptive flares with higher electric-field strengths tend to be accompanied by faster coronal mass ejections. The online version of this article (10.1007/s11207-018-1253-1) contains supplementary material, which is available to authorized users.

  17. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
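
    A sketch of the matching stage alone is shown below: a compact shape feature vector is computed for a binary word image and matched against a precomputed lexicon database by nearest neighbour. The features used here (ink density and a resampled projection profile) are simple stand-ins for the paper's image-morphological features.

      import numpy as np

      def word_features(word_img, bins=8):
          """word_img: binary word image (nonzero = ink)."""
          ink = word_img > 0
          profile = ink.mean(axis=0)                  # vertical projection profile
          # Resample the profile to a fixed length so words of any width compare.
          profile = np.interp(np.linspace(0, 1, bins),
                              np.linspace(0, 1, len(profile)), profile)
          return np.concatenate(([ink.mean()], profile))

      def recognize(word_img, lexicon_db, top_k=5):
          """lexicon_db: list of (word, feature_vector) pairs; several vectors per
          word may be stored (e.g., from different fonts or noise models)."""
          q = word_features(word_img)
          scores = [(np.linalg.norm(q - v), w) for w, v in lexicon_db]
          scores.sort()
          return [w for _, w in scores[:top_k]]       # top-k word hypotheses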

  18. Boost OCR accuracy using iVector based system combination approach

    NASA Astrophysics Data System (ADS)

    Peng, Xujun; Cao, Huaigu; Natarajan, Prem

    2015-01-01

    Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noises and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combine diverse recognition systems by using iVector based features, which is a newly developed method in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system, where iVector is transformed from a high-dimensional supervector of each text line and is used to predict the accuracy of OCR. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using word error rate (WER) metric.
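
    The final combination step can be pictured as in the sketch below: text-line hypotheses from two systems are paired by bounding-box overlap ratio, and for each pair the hypothesis with the higher predicted OCR score is kept. The data structures and the 0.5 overlap threshold are assumptions made for the example, not the paper's settings.

      def overlap_ratio(a, b):
          """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
          ax0, ay0, ax1, ay1 = a
          bx0, by0, bx1, by1 = b
          ix = max(0, min(ax1, bx1) - max(ax0, bx0))
          iy = max(0, min(ay1, by1) - max(ay0, by0))
          inter = ix * iy
          union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
          return inter / union if union else 0.0

      def combine(sys_a, sys_b, min_overlap=0.5):
          """Each system: list of dicts {"box": (x0,y0,x1,y1), "text": str, "score": float}."""
          merged, used = [], set()
          for ha in sys_a:
              best = max(sys_b, key=lambda hb: overlap_ratio(ha["box"], hb["box"]),
                         default=None)
              if best and overlap_ratio(ha["box"], best["box"]) >= min_overlap:
                  used.add(id(best))
                  # Keep whichever hypothesis has the higher predicted OCR score.
                  merged.append(ha if ha["score"] >= best["score"] else best)
              else:
                  merged.append(ha)
          # Text lines that only the second system found are kept as well.
          merged += [hb for hb in sys_b if id(hb) not in used]
          return merged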

  19. Microstructure study of a severely plastically deformed Mg-Zn-Y alloy by application of low angle annular dark field diffraction contrast imaging.

    PubMed

    Basha, Dudekula Althaf; Rosalie, Julian M; Somekawa, Hidetoshi; Miyawaki, Takashi; Singh, Alok; Tsuchiya, Koichi

    2016-01-01

    Microstructural investigation of extremely strained samples, such as severely plastically deformed (SPD) materials, by using conventional transmission electron microscopy techniques is very challenging due to strong image contrast resulting from the high defect density. In this study, low angle annular dark field (LAADF) imaging mode of scanning transmission electron microscope (STEM) has been applied to study the microstructure of a Mg-3Zn-0.5Y (at%) alloy processed by high pressure torsion (HPT). LAADF imaging advantages for observation of twinning, grain fragmentation, nucleation of recrystallized grains and precipitation on second phase particles in the alloy processed by HPT are highlighted. By using STEM-LAADF imaging with a range of incident angles, various microstructural features have been imaged, such as nanoscale subgrain structure and recrystallization nucleation even from the thicker region of the highly strained matrix. It is shown that nucleation of recrystallized grains starts at a strain level of revolution [Formula: see text] (earlier than detected by conventional bright field imaging). Occurrence of recrystallization of grains by nucleating heterogeneously on quasicrystalline particles is also confirmed. Minimizing all strain effects by LAADF imaging facilitated grain size measurement of [Formula: see text] nm in fully recrystallized HPT specimen after [Formula: see text].

  20. Intelligent retrieval of medical images from the Internet

    NASA Astrophysics Data System (ADS)

    Tang, Yau-Kuo; Chiang, Ted T.

    1996-05-01

    The objective of this study is to use Internet resources to provide a cost-effective, user-friendly method to access the medical image archive system and to provide an easy method for the user to identify the images required. This paper describes the prototype system architecture, the implementation, and results. In the study, we prototype the Intelligent Medical Image Retrieval (IMIR) system as a Hypertext Transport Protocol (HTTP) server and provide Hypertext Markup Language forms for the user, as an Internet client, to enter image retrieval criteria through a browser for review. We are developing the intelligent retrieval engine, with the capability to map the free text search criteria to the standard terminology used for medical image identification. We evaluate retrieved records based on the number of the free text entries matched and their relevance level to the standard terminology. We are in the integration and testing phase. We have collected only a few different types of images for testing and have trained a few phrases to map the free text to the standard medical terminology. Nevertheless, we are able to demonstrate the IMIR's ability to search, retrieve, and review medical images from the archives using a general Internet browser. The prototype also uncovered potential problems in performance, security, and accuracy. Additional studies and enhancements will make the system clinically operational.

  1. Arabic word recognizer for mobile applications

    NASA Astrophysics Data System (ADS)

    Khanna, Nitin; Abdollahian, Golnaz; Brame, Ben; Boutin, Mireille; Delp, Edward J.

    2011-03-01

    When traveling in a region where the local language is not written using a "Roman alphabet," translating written text (e.g., documents, road signs, or placards) is a particularly difficult problem since the text cannot be easily entered into a translation device or searched using a dictionary. To address this problem, we are developing the "Rosetta Phone," a handheld device (e.g., PDA or mobile telephone) capable of acquiring an image of the text, locating the region (word) of interest within the image, and producing both an audio and a visual English interpretation of the text. This paper presents a system targeted for interpreting words written in Arabic script. The goal of this work is to develop an autonomous, segmentation-free Arabic phrase recognizer, with computational complexity low enough to deploy on a mobile device. A prototype of the proposed system has been deployed on an iPhone with a suitable user interface. The system was tested on a number of noisy images, in addition to the images acquired from the iPhone's camera. It identifies Arabic words or phrases by extracting appropriate features and assigning "codewords" to each word or phrase. On a dictionary of 5,000 words, the system uniquely mapped (word-image to codeword) 99.9% of the words. The system has a 82% recognition accuracy on images of words captured using the iPhone's built-in camera.

  2. Systematic literature review of digital three-dimensional superimposition techniques to create virtual dental patients.

    PubMed

    Joda, Tim; Brägger, Urs; Gallucci, German

    2015-01-01

    Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and medical field of interest. Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, capturing data in a single step.

  3. Text extraction from images in the wild using the Viola-Jones algorithm

    NASA Astrophysics Data System (ADS)

    Saabna, Raid M.; Zingboim, Eran

    2018-04-01

    Text localization and extraction is an important issue in modern computer vision applications. Applications such as reading and translating text in the wild or from videos are among the many that can benefit from results in this field. In this work, we adopt the well-known Viola-Jones algorithm to enable text extraction and localization from images in the wild. Viola-Jones is an efficient and fast image-processing algorithm originally used for face detection. Based on some resemblance between text and face detection tasks in the wild, we have modified Viola-Jones to detect regions of interest where text may be localized. In the proposed approach, modifications to the Haar-like features and a semi-automatic process of data set generation and manipulation are presented to train the algorithm. Sliding windows of different sizes are used to scan the image for the presence of individual letters and letter clusters. A post-processing step combines the detected letters into words and removes false positives. The novelty of the presented approach is using the strengths of a modified Viola-Jones algorithm to identify many different objects representing different letters and clusters of similar letters, and later combining them into words of varying lengths. Impressive results were obtained on the ICDAR contest data sets.
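
    The post-processing step described above can be sketched as follows: detected letter boxes are scanned left to right and merged into word boxes whenever they are horizontally close and vertically overlapping. The gap and overlap thresholds below are illustrative, not values from the paper.

      def group_letters_into_words(boxes, max_gap=10, min_v_overlap=0.5):
          """boxes: list of (x, y, w, h) letter detections."""
          words = []
          for x, y, w, h in sorted(boxes):
              if words:
                  wx, wy, ww, wh = words[-1]
                  gap = x - (wx + ww)
                  overlap = min(y + h, wy + wh) - max(y, wy)
                  if gap <= max_gap and overlap >= min_v_overlap * min(h, wh):
                      # Extend the current word box to include this letter.
                      nx0, ny0 = min(wx, x), min(wy, y)
                      nx1, ny1 = max(wx + ww, x + w), max(wy + wh, y + h)
                      words[-1] = (nx0, ny0, nx1 - nx0, ny1 - ny0)
                      continue
              words.append((x, y, w, h))
          return words

      letters = [(5, 10, 8, 12), (15, 11, 7, 12), (24, 10, 8, 13), (60, 12, 9, 12)]
      print(group_letters_into_words(letters))   # -> two word boxes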

  4. Biological Oceanography

    NASA Astrophysics Data System (ADS)

    Dyhrman, Sonya

    2004-10-01

    The ocean is arguably the largest habitat on the planet, and it houses an astounding array of life, from microbes to whales. As a testament to this diversity and its importance, the discipline of biological oceanography spans studies of all levels of biological organization, from that of single genes, to organisms, to their population dynamics. Biological oceanography also includes studies on how organisms interact with, and contribute to, essential global processes. Students of biological oceanography are often as comfortable looking at satellite images as they are electron micrographs. This diversity of perspective begins the textbook Biological Oceanography, with cover graphics including a Coastal Zone Color Scanner image representing chlorophyll concentration, an electron micrograph of a dinoflagellate, and a photograph of a copepod. These images instantly capture the reader's attention and illustrate some of the different scales on which budding oceanographers are required to think. Having taught a core graduate course in biological oceanography for many years, Charlie Miller has used his lecture notes as the genesis for this book. The text covers the subject of biological oceanography in a manner that is targeted to introductory graduate students, but it would also be appropriate for advanced undergraduates.

  5. Improving the Raster Scanning Methods used with X-ray Fluorescence to See the Ancient Greek Text of Archimedes (SULI Paper)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Isabella B.; /Norfolk State U. /SLAC, SSRL

    2006-01-04

    X-ray fluorescence is being used to detect the ancient Greek copy of Archimedes' work. The copy of Archimedes' text was erased with a weak acid and written over to make a prayer book in the Middle Ages. The ancient parchment, made of goat skin, has on it some of Archimedes' most valuable writings. The ink in the text contains iron which will fluoresce under x-ray radiation. My research project deals with the scanning and imaging process. The palimpsest is put in a stage that moves in a raster format. As the beam hits the parchment, a germanium detector detects the iron atoms and discriminates against other elements. Since the computer scans in both forwards and backwards directions, it is imperative that each row of data lines up exactly on top of the next row. There are several parameters to consider when scanning the parchment. These parameters include: speed, count time, shutter time, x-number of points, and acceleration. Formulas were made to relate these parameters together. During the actual beam time of this project, the scanning was very slow going; it took 30 hours to scan 1/2 of a page. Using the formulas, the scientists doubled distance and speed to scan the parchment faster; however, the grey-scale data was not lined up properly, causing the images to look blurred. My project was to find out why doubling the parameters caused blurred images, and to fix the problem if it is fixable.

  6. Breast and Gynecologic Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"184","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Breast and Gynecologic Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Breast and Gynecologic Cancer Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Breast and Gynecologic Cancer Research

  7. Lung and Upper Aerodigestive Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"180","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Lung and Upper Aerodigestive Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Lung and Upper Aerodigestive Cancer Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Lung and Upper Aerodigestive

  8. Teachers' Interpretations of Texts-Image Juxtapositions in Textbooks: From the Concrete to the Abstract

    ERIC Educational Resources Information Center

    Eilam, Billie; Poyas, Yael

    2012-01-01

    The paper examined how expert literature teachers coped with a novel textbook that integrates literature with the visual arts, a particular interdisciplinary case of text-image relations in textbooks. The examination was performed within the framework of teachers' responses to curricular changes and of theory regarding strategies of interdisciplinary…

  9. Modeling semantic aspects for cross-media image indexing.

    PubMed

    Monay, Florent; Gatica-Perez, Daniel

    2007-10-01

    To go beyond the query-by-example paradigm in image retrieval, there is a need for semantic indexing of large image collections for intuitive text-based image search. Different models have been proposed to learn the dependencies between the visual content of an image set and the associated text captions, thereby allowing for the automatic creation of semantic indices for unannotated images. The task, however, remains unsolved. In this paper, we present three alternatives to learn a Probabilistic Latent Semantic Analysis model (PLSA) for annotated images, and evaluate their respective performance for automatic image indexing. Under the PLSA assumptions, an image is modeled as a mixture of latent aspects that generates both image features and text captions, and we investigate three ways to learn the mixture of aspects. We also propose a more discriminative image representation than the traditional Blob histogram, concatenating quantized local color information and quantized local texture descriptors. The first learning procedure of a PLSA model for annotated images is a standard EM algorithm, which implicitly assumes that the visual and the textual modalities can be treated equivalently. The other two models are based on an asymmetric PLSA learning, which allows the definition of the latent space to be constrained by either the visual or the textual modality. We demonstrate that the textual modality is more appropriate for learning a semantically meaningful latent space, which translates into improved annotation performance. A comparison of our learning algorithms with respect to recent methods on a standard dataset is presented, and a detailed evaluation of the performance shows the validity of our framework.
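
    To make the modelling step concrete, the sketch below fits a standard (symmetric) PLSA model with EM on a document-by-token count matrix, where each "document" is an image and the tokens pool quantized visual features with caption words. It is a minimal illustration under assumed data and dimensions; the asymmetric learning variants discussed in the abstract are not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        n_docs, n_tokens, n_aspects = 50, 200, 8
        N = rng.poisson(0.3, size=(n_docs, n_tokens)).astype(float)  # stand-in counts

        p_z_d = rng.random((n_docs, n_aspects));   p_z_d /= p_z_d.sum(1, keepdims=True)
        p_w_z = rng.random((n_aspects, n_tokens)); p_w_z /= p_w_z.sum(1, keepdims=True)

        for _ in range(50):
            # E-step: responsibilities P(z | d, w), shape (docs, tokens, aspects)
            joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
            resp = joint / (joint.sum(2, keepdims=True) + 1e-12)
            # M-step: re-estimate P(w | z) and P(z | d) from expected counts
            expected = N[:, :, None] * resp
            p_w_z = expected.sum(0).T
            p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
            p_z_d = expected.sum(1)
            p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12

        # Annotating a new image amounts to ranking caption words w by
        # P(w | d) = sum_z P(w | z) P(z | d) once P(z | d) has been estimated.
        print(p_w_z.shape, p_z_d.shape)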

  10. NOAA Data Rescue of Key Solar Databases and Digitization of Historical Solar Images

    NASA Astrophysics Data System (ADS)

    Coffey, H. E.

    2006-08-01

    Over a number of years, the staff at NOAA National Geophysical Data Center (NGDC) has worked to rescue key solar databases by converting them to digital format and making them available via the World Wide Web. NOAA has had several data rescue programs where staff compete for funds to rescue important and critical historical data that are languishing in archives and at risk of being lost due to deteriorating condition, loss of any metadata or descriptive text that describe the databases, lack of interest or funding in maintaining databases, etc. The Solar-Terrestrial Physics Division at NGDC was able to obtain funds to key in some critical historical tabular databases. Recently the NOAA Climate Database Modernization Program (CDMP) funded a project to digitize historical solar images, producing a large online database of historical daily full disk solar images. The images include the wavelengths Calcium K, Hydrogen Alpha, and white light photos, as well as sunspot drawings and the comprehensive drawings of a multitude of solar phenomena on one daily map (Fraunhofer maps and Wendelstein drawings). Included in the digitization are high resolution solar H-alpha images taken at the Boulder Solar Observatory 1967-1984. The scanned daily images document many phases of solar activity, from decadal variation to rotational variation to daily changes. Smaller versions are available online. Larger versions are available by request. See http://www.ngdc.noaa.gov/stp/SOLAR/ftpsolarimages.html. The tabular listings and solar imagery will be discussed.

  11. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM) to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standard-based metadata files using Web service and parses and stores the metadata in a relational database allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.
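
    The core idea of explicitly linking images, ROIs, and semantic observations in a relational store can be sketched as follows. The schema, table names, and the simple "match observations" query are illustrative assumptions, not BIMM's actual design.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE images (id INTEGER PRIMARY KEY, accession TEXT, modality TEXT);
        CREATE TABLE rois (
            id INTEGER PRIMARY KEY, image_id INTEGER REFERENCES images(id),
            x0 REAL, y0 REAL, x1 REAL, y1 REAL);
        CREATE TABLE observations (
            roi_id INTEGER REFERENCES rois(id),
            feature TEXT, value TEXT);  -- imaging observation characteristics (IOCs)
        """)

        db.execute("INSERT INTO images VALUES (1, 'CT-0001', 'CT')")
        db.execute("INSERT INTO rois VALUES (1, 1, 120, 80, 180, 140)")
        db.executemany("INSERT INTO observations VALUES (?, ?, ?)",
                       [(1, "margin", "circumscribed"), (1, "enhancement", "hypervascular")])

        # "Match observations": rank stored ROIs by how many semantic features
        # they share with the query ROI.
        query = ["margin=circumscribed", "enhancement=hypervascular"]
        rows = db.execute("""
            SELECT r.image_id, r.id, COUNT(*) AS matched
            FROM observations o JOIN rois r ON o.roi_id = r.id
            WHERE o.feature || '=' || o.value IN (?, ?)
            GROUP BY r.id ORDER BY matched DESC
        """, query).fetchall()
        print(rows)  # -> [(1, 1, 2)]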

  12. An overview of remote sensing and geodesy for epidemiology and public health application.

    PubMed

    Hay, S I

    2000-01-01

    The techniques of remote sensing (RS) and geodesy have the potential to revolutionize the discipline of epidemiology and its application in human health. As a new departure from conventional epidemiological methods, these techniques require some detailed explanation. This review provides the theoretical background to RS including (i) its physical basis, (ii) an explanation of the orbital characteristics and specifications of common satellite sensor systems, (iii) details of image acquisition and procedures adopted to overcome inherent sources of data degradation, and (iv) a background to geophysical data preparation. This information allows RS applications in epidemiology to be readily interpreted. Some of the techniques used in geodesy, to locate features precisely on Earth so that they can be registered to satellite sensor-derived images, are also included. While the basic principles relevant to public health are presented here, inevitably many of the details must be left to specialist texts.

  13. An Overview of Remote Sensing and Geodesy for Epidemiology and Public Health Application

    PubMed Central

    Hay, S.I.

    2011-01-01

    The techniques of remote sensing (RS) and geodesy have the potential to revolutionize the discipline of epidemiology and its application in human health. As a new departure from conventional epidemiological methods, these techniques require some detailed explanation. This review provides the theoretical background to RS including (i) its physical basis, (ii) an explanation of the orbital characteristics and specifications of common satellite sensor systems, (iii) details of image acquisition and procedures adopted to overcome inherent sources of data degradation, and (iv) a background to geophysical data preparation. This information allows RS applications in epidemiology to be readily interpreted. Some of the techniques used in geodesy, to locate features precisely on Earth so that they can be registered to satellite sensor-derived images, are also included. While the basic principles relevant to public health are presented here, inevitably many of the details must be left to specialist texts. PMID:10997203

  14. Archive of chirp seismic reflection data collected during USGS cruises 00SCC02 and 00SCC04, Barataria Basin, Louisiana, May 12-31 and June 17-July 2, 2000

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, S.V.; Kindinger, J.L.; Flocks, J.G.; Wiese, D.S.; Kulp, Mark; Penland, Shea; Britsch, L.D.; Brooks, G.R.

    2003-01-01

    This archive consists of two-dimensional marine seismic reflection profile data collected in the Barataria Basin of southern Louisiana. These data were acquired in May, June, and July of 2000 aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper-Text Markup Language (HTML), shapefiles, and Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) information provided here is compatible with Environmental Systems Research Institute (ESRI) GIS software.

  15. Sexting among singles in the USA: prevalence of sending, receiving, and sharing sexual messages and images.

    PubMed

    Garcia, Justin R; Gesselman, Amanda N; Siliman, Shadia A; Perry, Brea L; Coe, Kathryn; Fisher, Helen E

    2016-07-29

    Background: The transmission of sexual images and messages via mobile phone or other electronic media (sexting) has been associated with a variety of mostly negative social and behavioural consequences. Research on sexting has focussed on youth, with limited data across demographics and with little known about the sharing of private sexual images and messages with third parties. Methods: The present study examines sexting attitudes and behaviours, including sending, receiving, and sharing of sexual messages and images, across gender, age, and sexual orientation. A total of 5805 single adults were included in the study (2830 women; 2975 men), ranging in age from 21 to 75+ years. Results: Overall, 21% of participants reported sending and 28% reported receiving sexually explicit text messages; both sending and receiving 'sexts' was most common among younger respondents. Although 73.2% of participants reported discomfort with unauthorised sharing of sexts beyond the intended recipient, of those who had received sext images, 22.9% reported sharing them with others (on average with 3.17 friends). Participants also reported concern about the potential consequences of sexting on their social lives, careers, and psychosocial wellbeing. Conclusion: Views on the impact of sexting on reputation suggest a contemporary struggle to reconcile digital eroticism with real-world consequences. These findings suggest a need for future research into negotiations of sexting motivations, risks, and rewards.

  16. 3D/2D model-to-image registration by imitation learning for cardiac procedures.

    PubMed

    Toth, Daniel; Miao, Shun; Kurzendorfer, Tanja; Rinaldi, Christopher A; Liao, Rui; Mansi, Tommaso; Rhode, Kawal; Mountney, Peter

    2018-05-12

    In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints, or identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application. This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images. Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases. Besides demonstrating the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.

  17. Intra-operative fiducial-based CT/fluoroscope image registration framework for image-guided robot-assisted joint fracture surgery.

    PubMed

    Dagnino, Giulio; Georgilas, Ioannis; Morad, Samir; Gibbons, Peter; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2017-08-01

    Joint fractures must be accurately reduced while minimising soft tissue damage to avoid negative surgical outcomes. In this regard, we have developed the RAFS surgical system, which allows the percutaneous reduction of intra-articular fractures and provides intra-operative real-time 3D image guidance to the surgeon. Earlier experiments showed the effectiveness of the RAFS system on phantoms, but also key issues which precluded its use in a clinical application. This work proposes a redesign of the RAFS navigation system that overcomes the earlier version's issues, aiming to move the RAFS system into a surgical environment. The navigation system is improved through an image registration framework allowing the intra-operative registration between pre-operative CT images and intra-operative fluoroscopic images of a fractured bone using a custom-made fiducial marker. The objective of the registration is to estimate the relative pose between a bone fragment and an orthopaedic manipulation pin inserted into it intra-operatively. The actual pose of the bone fragment can be updated in real time using an optical tracker, enabling the image guidance. Experiments on phantoms and cadavers demonstrated the accuracy and reliability of the registration framework, showing a reduction accuracy (sTRE) of about [Formula: see text] (phantom) and [Formula: see text] (cadavers). Four distal femur fractures were successfully reduced in cadaveric specimens using the improved navigation system and the RAFS system following the new clinical workflow (reduction error [Formula: see text], [Formula: see text]). Experiments showed the feasibility of the image registration framework. It was successfully integrated into the navigation system, allowing the use of the RAFS system in a realistic surgical application.

  18. S201 catalog of far-ultraviolet objects

    NASA Technical Reports Server (NTRS)

    Page, T.; Carruthers, G. K.; Hill, R. E.

    1978-01-01

    A catalog of star images was compiled from images obtained by an NRL Far-Ultraviolet Camera/Spectrograph operated from 21 to 23 April 1972 on the lunar surface during the Apollo-16 mission. These images were scanned on a microdensitometer, and the output recorded on magnetic tapes. The catalog is divided into 11 parts, covering ten fields in the sky (the Sagittarius field being covered by two parts), and each part is headed by a constellation name and the field center coordinates. The errors in position of the detected images are less than about 3 arc-min. Correlations are given with star numbers in the Smithsonian Astrophysical Observatory catalog. Values are given of the peak density and the density volume. The text includes a discussion of the photometry, corrections thereto due to threshold and saturation effects, and its comparison with theoretical expectation, stellar model atmospheres, and a generalized far-ultraviolet interstellar extinction law. The S201 catalog is also available on a single reel of seven-track magnetic tape.

  19. Volumetric in vivo imaging of microvascular perfusion within the intact cochlea in mice using ultra-high sensitive optical microangiography.

    PubMed

    Subhash, Hrebesh M; Davila, Viviana; Sun, Hai; Nguyen-Huynh, Anh T; Shi, Xiaorui; Nuttall, Alfred L; Wang, Ruikang K

    2011-02-01

    Studying the inner ear microvascular dynamics is extremely important to understand the cochlear function and to further advance the diagnosis, prevention, and treatment of many otologic disorders. However, there is currently no effective imaging tool available that is able to access the blood flow within the intact cochlea. In this paper, we report the use of an ultrahigh sensitive optical micro-angiography (UHS-OMAG) imaging system to image 3-D microvascular perfusion within the intact cochlea in living mice. The UHS-OMAG image system used in this study is based on spectral domain optical coherence tomography, which uses a broadband light source centered at 1300 nm with an imaging rate of 47[Formula: see text] 000 A-scans/s, capable of acquiring high-resolution B scans at 300 frames/s. The technique is sensitive enough to image very slow blood flow velocities, such as those found in capillary networks. The 3-D imaging acquisition time for a whole cochlea is  ∼ 4.1 s. We demonstrate that volumetric reconstruction of microvascular flow obtained by UHS-OMAG provides a comprehensive perfusion map of several regions of the cochlea, including the otic capsule, the stria vascularis of the apical and middle turns and the radiating arterioles that emanate from the modiolus.

  20. ATLAS Live: Collaborative Information Streams

    NASA Astrophysics Data System (ADS)

    Goldfarb, Steven; ATLAS Collaboration

    2011-12-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  1. Innovative uses of GigaPan Technology for Onsite and Distance Education

    NASA Astrophysics Data System (ADS)

    Bentley, C.; Schott, R. C.; Piatek, J. L.; Richards, B.

    2013-12-01

    GigaPans are gigapixel panoramic images that can be viewed at a wide range of magnifications, allowing users to explore them in various degrees of detail from the smallest scale to the full image extent. In addition to panoramic images captured with the GigaPan camera mount ('Dry Falls' - http://www.gigapan.com/gigapans/89093), users can also upload annotated images (For example, 'Massanutten sandstone slab with trace fossils (annotated)', http://www.gigapan.com/gigapans/124295) and satellite images (For example, 'Geology vs. Topography - State of Connecticut', http://www.gigapan.com/gigapans/111265). Panoramas with similar topics have been gathered together on the site in galleries, both user-generated and site-curated (For example, http://www.gigapan.com/galleries?categories=geology&page=1). Further innovations in display technology have also led to the development of improved viewers (for example, the annotations in the image linked above can be explored via paired viewers at http://coursecontent.nic.edu/bdrichards/gigapixelimages/callanview). GigaPan panoramas can be created through use of the GigaPan robotic camera mount and a digital camera (different models of the camera mount are available and work with a wide range of cameras). The camera mount can be used to create high-resolution pans ranging in scale from hand sample to outcrop up to landscape via the stitching software included with the robotic mount. The software can also be used to generate GigaPan images from other sources, such as thin section or satellite images, so these images can also be viewed with the online viewer. GigaPan images are typically viewed via a web-based interface that allows the user to interact with the image from the limits of the image detail up to the full panorama. After uploading, information can be added to panoramas with both text captions and geo-referencing (geo-located panoramas can then be viewed in Google Earth). Users can record specific locations and zoom levels in these images via "snapshots": these snapshots can direct others to the same location in the image as well as generate conversations with attached text comments. Users can also group related GigaPans by creating "galleries" of thematically related images (similar to photo albums). Gigapixel images can also be formatted for processing and viewing in an increasing number of platforms/modes as software vendors and internet browsers begin to provide 'add-in' support. This opens up opportunities for innovative adaptations for geoscience education (For example, http://coursecontent.nic.edu/bdrichards/gigapixelimages/dryfalls). Specific applications of these images for geoscience education include classroom activities and independent exercises that encourage students to take an active inquiry-based approach to understanding geoscience concepts at multiple skill levels. GigaPans in field research serve as both records of field locations and additional datasets for detailed analyses, such as observing color changes or variations in grain size. Related GigaPans can also be presented together when embedded in webpages, useful for generating exercises for education purposes or for analyses of outcrops from the macro (landscape, outcrop) down to the micro scale (hand sample, thin section).

  2. One Click to the Cosmos: The AstroPix Image Archive

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Llamas, J.; Squires, G. K.; Brinkworth, C.; X-ray Center, Chandra; ESO/ESA; Science Center, Spitzer; STScI

    2013-01-01

    Imagine a single website that acts as a portal to the entire wealth of public imagery spanning the world's observatories. This is the goal of the AstroPix project (astropix.ipac.caltech.edu), and you can use it today! Although still in a beta development state, this past year has seen the inclusion of thousands of images spanning some of the most prominent observatories in the world, including Chandra, ESO, Galex, Herschel, Hubble, Spitzer, and WISE, with more on the way. The archive is unique as it is built around the Astronomical Visualization Metadata (AVM) standard, which captures the rich contextual information for each image. This ranges from titles and descriptions, to color representations and observation details, to sky coordinates. AVM enables AstroPix imagery to be used in a variety of unique ways that benefit formal and informal education as well as astronomers and the general public. Visitors to Astropix can search the database using simple free-text queries, or use a structured search (similar to "Smart Playlists" found in iTunes, for example). We are also developing public application programming interfaces (APIs) to allow third party software and websites to access the growing content for a variety of uses (planetarium software, museum kiosks, mobile apps, and creative web interfaces, to name a few). Contributing image assets to AstroPix is as easy as tagging the images with the relevant metadata and including the web links to the images in a simple RSS feed. We will cover some of the latest information about tools to contribute images to AstroPix and ways to use the site.
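
    The contribution path described above can be illustrated with a minimal RSS feed carrying links to a release page and its image file. The fields shown are generic RSS 2.0 elements chosen for illustration; the exact ingest format and the AVM tags AstroPix expects should be taken from its own documentation.

        import xml.etree.ElementTree as ET

        # Build a tiny RSS 2.0 feed pointing at one hypothetical image release.
        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = "Observatory public image releases"

        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = "Spiral galaxy in infrared"
        ET.SubElement(item, "link").text = "https://example.org/releases/2013/galaxy.html"
        ET.SubElement(item, "enclosure",
                      url="https://example.org/releases/2013/galaxy_large.tif",
                      type="image/tiff")

        print(ET.tostring(rss, encoding="unicode"))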

  3. Ultrasound image-based thyroid nodule automatic segmentation using convolutional neural networks.

    PubMed

    Ma, Jinlian; Wu, Fa; Jiang, Tian'an; Zhao, Qiyu; Kong, Dexing

    2017-11-01

    Delineation of thyroid nodule boundaries from ultrasound images plays an important role in calculation of clinical indices and diagnosis of thyroid diseases. However, accurate and automatic segmentation of thyroid nodules is challenging because of their heterogeneous appearance and components similar to the background. In this study, we employ a deep convolutional neural network (CNN) to automatically segment thyroid nodules from ultrasound images. Our CNN-based method formulates the thyroid nodule segmentation problem as a patch classification task, where the relationship among patches is ignored. Specifically, the CNN takes image patches from images of normal thyroids and thyroid nodules as inputs and generates the segmentation probability maps as outputs. A multi-view strategy is used to improve the performance of the CNN-based model. Additionally, we compared the performance of our approach with that of the commonly used segmentation methods on the same dataset. The experimental results suggest that our proposed method outperforms prior methods on thyroid nodule segmentation. Moreover, the results show that the CNN-based model is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. In detail, our CNN-based model achieves an average overlap metric, Dice ratio, true positive rate, false positive rate, and modified Hausdorff distance of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] over all folds, respectively. Our proposed method is fully automatic without any user interaction. Quantitative results also indicate that our method is efficient and accurate enough to replace the time-consuming and tedious manual segmentation approach, demonstrating its potential for clinical application.
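
    A minimal sketch of the patch-classification formulation is given below: a small CNN scores each patch for whether its centre pixel lies inside a nodule, and sliding the patch window over the image yields the probability map. The architecture, patch size, and layer widths are illustrative assumptions, not the network evaluated in the study.

        import torch
        import torch.nn as nn

        class PatchNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1)
                )

            def forward(self, x):                  # x: (batch, 1, 32, 32) patches
                return torch.sigmoid(self.classifier(self.features(x)))

        model = PatchNet()
        patches = torch.randn(4, 1, 32, 32)        # stand-ins for ultrasound patches
        probs = model(patches)                     # per-patch nodule probability
        print(probs.shape)                         # torch.Size([4, 1])

        # Writing each probability back to the centre-pixel location of its patch
        # produces the segmentation probability map; averaging maps obtained from
        # several views gives a multi-view result in the spirit of the abstract.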

  4. X-window-based 2K display workstation

    NASA Astrophysics Data System (ADS)

    Weinberg, Wolfram S.; Hayrapetian, Alek S.; Cho, Paul S.; Valentino, Daniel J.; Taira, Ricky K.; Huang, H. K.

    1991-07-01

    A high-definition, high-performance display station for reading and review of digital radiological images is introduced. The station is based on a Sun SPARC Station 4 and employs X window system for display and manipulation of images. A mouse-operated graphic user interface is implemented utilizing Motif-style tools. The system supports up to four MegaScan gray-scale 2560 X 2048 monitors. A special configuration of frame and video buffer yields a data transfer of 50 M pixels/s. A magnetic disk array supplies a storage capacity of 2 GB with a data transfer rate of 4-6 MB/s. The system has access to the central archive through an ultrahigh-speed fiber-optic network and patient studies are automatically transferred to the local disk. The available image processing functions include change of lookup table, zoom and pan, and cine. Future enhancements will provide for manual contour tracing, length, area, and density measurements, text and graphic overlay, as well as composition of selected images. Additional preprocessing procedures under development will optimize the initial lookup table and adjust the images to a standard orientation.

  5. Machine Learning and Radiology

    PubMed Central

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer-aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  6. High-speed spatial frequency domain imaging of rat cortex detects dynamic optical and physiological properties following cardiac arrest and resuscitation.

    PubMed

    Wilson, Robert H; Crouzet, Christian; Torabzadeh, Mohammad; Bazrafkan, Afsheen; Farahabadi, Maryam H; Jamasian, Babak; Donga, Dishant; Alcocer, Juan; Zaher, Shuhab M; Choi, Bernard; Akbari, Yama; Tromberg, Bruce J

    2017-10-01

    Quantifying rapidly varying perturbations in cerebral tissue absorption and scattering can potentially help to characterize changes in brain function caused by ischemic trauma. We have developed a platform for rapid intrinsic signal brain optical imaging using macroscopically structured light. The device performs fast, multispectral, spatial frequency domain imaging (SFDI), detecting backscattered light from three-phase binary square-wave projected patterns, which have a much higher refresh rate than sinusoidal patterns used in conventional SFDI. Although not as fast as "single-snapshot" spatial frequency methods that do not require three-phase projection, square-wave patterns allow accurate image demodulation in applications such as small animal imaging where the limited field of view does not allow single-phase demodulation. By using 655, 730, and 850 nm light-emitting diodes, two spatial frequencies ([Formula: see text] and [Formula: see text]), three spatial phases (120 deg, 240 deg, and 360 deg), and an overall camera acquisition rate of 167 Hz, we map changes in tissue absorption and reduced scattering parameters ([Formula: see text] and [Formula: see text]) and oxy- and deoxyhemoglobin concentration at [Formula: see text]. We apply this method to a rat model of cardiac arrest (CA) and cardiopulmonary resuscitation (CPR) to quantify hemodynamics and scattering on temporal scales ([Formula: see text]) ranging from tens of milliseconds to minutes. We observe rapid concurrent spatiotemporal changes in tissue oxygenation and scattering during CA and following CPR, even when the cerebral electrical signal is absent. We conclude that square-wave SFDI provides an effective technical strategy for assessing cortical optical and physiological properties by balancing competing performance demands for fast signal acquisition, small fields of view, and quantitative information content.
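
    The demodulation step underlying this kind of three-phase acquisition can be sketched with the conventional SFDI amplitude formulas, applied here to three frames assumed to be captured with the projected pattern shifted by one third of a period each. This is a generic illustration of three-phase demodulation, not the square-wave-specific processing used in the paper.

        import numpy as np

        def demodulate(i1, i2, i3):
            """Return (AC amplitude, DC amplitude) maps from three phase-shifted frames."""
            ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
                (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
            dc = (i1 + i2 + i3) / 3.0
            return ac, dc

        rng = np.random.default_rng(1)
        frames = [rng.random((128, 128)) for _ in range(3)]  # stand-ins for camera frames
        m_ac, m_dc = demodulate(*frames)
        print(m_ac.shape, m_dc.shape)

        # Repeating this per wavelength and spatial frequency, then inverting a
        # light-transport model pixel by pixel, yields the absorption and reduced
        # scattering maps discussed above.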

  7. Ubiquitous picture-rich content representation

    NASA Astrophysics Data System (ADS)

    Wang, Wiley; Dean, Jennifer; Muzzolini, Russ

    2010-02-01

    The number of digital images taken by the average consumer is consistently increasing. People enjoy the convenience of storing and sharing their pictures through online (digital) and offline (traditional) media. A set of pictures can be uploaded to online photo services, web blogs, and social network websites. Alternatively, these images can be used to generate prints, cards, photo books, or other photo products. Through uploading and sharing, images are easily transferred from one format to another. And often, a different set of associated content (text, tags) is created across formats. For example, on his web blog, a user may journal his experiences of his recent travel; on his social network website, his friends tag and comment on the pictures; in his online photo album, some pictures are titled and keyword-tagged. When the user wants to tell a complete story, perhaps in a photo book, he must collect the pictures, writings, comments, and other content from across all formats and organize them in a book format. The user has to arrange the content of his trip in each format. The arrangement, that is, the associations between the images, tags, keywords and text, cannot be shared with other formats. In this paper, we propose a system that allows content to be easily created and shared across various digital media formats. We define a unified data association structure to connect images, documents, comments, tags, keywords and other data. This content structure allows the user to switch representation formats without re-editing. The framework under each format can emphasize (display or hide) content elements based on preference. For example, a slide show view will emphasize the display of pictures with limited text; a blog view will display highlighted images and journal text; and the photo book will try to fit in all images and text content. In this paper, we discuss the strategy for associating pictures with text content so that together they can naturally tell a story. We also present sample solutions for different formats, such as the picture view, blog view and photo book view.
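
    A minimal sketch of such a uniform association structure is shown below: pictures, captions, tags, comments, and journal text are linked once, and each view renders the same objects differently. Class and field names are illustrative, not the structure defined in the paper.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Picture:
            uri: str
            caption: str = ""
            tags: List[str] = field(default_factory=list)
            comments: List[str] = field(default_factory=list)

        @dataclass
        class StoryItem:
            picture: Picture
            journal_text: str = ""

        story = [
            StoryItem(Picture("trip/001.jpg", "Arrival", ["travel"]), "We landed at dawn..."),
            StoryItem(Picture("trip/002.jpg", "Old town", ["architecture"], ["Love this!"])),
        ]

        def render_slideshow(items):   # pictures only, limited text
            return [(i.picture.uri, i.picture.caption) for i in items]

        def render_blog(items):        # highlighted images plus journal text
            return [(i.picture.uri, i.journal_text or i.picture.caption) for i in items]

        print(render_slideshow(story))
        print(render_blog(story))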

  8. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    NASA Astrophysics Data System (ADS)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it assigned multiple classes to the images. A framework for the I2T methodology is presented.
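
    The two classifier setups compared in the study can be sketched as follows: a single multi-class SVM versus a bank of per-material binary SVMs whose scores can be thresholded independently to report several materials in one image. The random features stand in for the Gabor-texture and RGB-histogram descriptors; sizes and labels are illustrative.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.multiclass import OneVsRestClassifier

        rng = np.random.default_rng(0)
        materials = ["brick", "cloth", "grass", "sand", "stone", "wood"]
        X = rng.random((120, 64))                      # 120 images, 64-D feature vectors
        y = rng.integers(0, len(materials), size=120)  # one material label per image

        multi = SVC(kernel="rbf").fit(X, y)            # single multi-class SVM
        per_material = OneVsRestClassifier(
            SVC(kernel="rbf", probability=True)).fit(X, y)  # one binary SVM per material

        query = rng.random((1, 64))
        print("multi-class prediction:", materials[multi.predict(query)[0]])
        # Independent per-material scores allow several materials in one image.
        scores = per_material.predict_proba(query)[0]
        print({m: round(float(s), 2) for m, s in zip(materials, scores)})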

  9. Scene text recognition in mobile applications by character descriptor and structure configuration.

    PubMed

    Yi, Chucai; Tian, Yingli

    2014-07-01

    Text characters and strings in natural scene can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.

  10. Characteristics of effective electronic mail messages distributed to healthcare professionals in a hospital setting.

    PubMed

    Kaltschmidt, Jens; Schmitt, Simon P W; Pruszydlo, Markus G; Haefeli, Walter E

    2008-01-01

    Electronic mailing systems (e-mail) are an important means to disseminate information within electronic networks. However, in large business communities, including the hectic environment of hospitals, it may be difficult to induce account holders to read the e-mail. In two mailings disseminated in a large university hospital we evaluated the impact of e-mail layout (three e-mail text versions, two e-mails with graphics) on the willingness of its approximately 6500 recipients to seek additional electronic information and open an integrated link. Overall access rates after 90 days were 21.1 and 23.5%, with more than 70% of the respondents opening the link within 3 days. Differences between layouts were large: artwork text, HTML text, animated GIF, and static image prompted access 1.2, 1.7, 1.8, and 2.3 times more often than the courier plain-text message (p ≤ 0.001). This study revealed that layout is a major determinant of the success of an information campaign.

  11. Characteristics of Effective Electronic Mail Messages Distributed to Healthcare Professionals in a Hospital Setting

    PubMed Central

    Kaltschmidt, Jens; Schmitt, Simon P.W.; Pruszydlo, Markus G.; Haefeli, Walter E.

    2008-01-01

    Electronic mailing systems (e-mail) are an important means to disseminate information within electronic networks. However, in large business communities, including the hectic environment of hospitals, it may be difficult to induce account holders to read the e-mail. In two mailings disseminated in a large university hospital we evaluated the impact of e-mail layout (three e-mail text versions, two e-mails with graphics) on the willingness of its ∼6500 recipients to seek additional electronic information and open an integrated link. Overall access rates after 90 days were 21.1 and 23.5%, with more than 70% of the respondents opening the link within 3 days. Differences between layouts were large: artwork text, HTML text, animated GIF, and static image prompted access 1.2, 1.7, 1.8, and 2.3 times more often than the courier plain-text message (p ≤ 0.001). This study revealed that layout is a major determinant of the success of an information campaign. PMID:18096910

  12. Collaborative Workspaces within Distributed Virtual Environments.

    DTIC Science & Technology

    1996-12-01

    such as a text document, a 3D model, or a captured image using a collaborative workspace called the InPerson Whiteboard. The Whiteboard contains a ... commands for editing objects drawn on the screen. Finally, when the call is completed, the Whiteboard can be saved to a file for future use. IRIS Annotator ... use, and a shared whiteboard that includes a number of multimedia annotation tools. Both systems are also mindful of bandwidth limitations and can

  13. Blogs: A New Frontier for School Discipline Issues. A Legal Memorandum: Quarterly Law Topics for School Leaders. Vol. 7, No. 1, Fall 2006

    ERIC Educational Resources Information Center

    Kirby, Elizabeth; Kallio, Brenda

    2006-01-01

    Blogging is a widely used means of communication for millions of Internet users around the world. Blogs, which are Web sites or Weblogs where entries may be posted on a regular basis, frequently serve as online diaries or commentaries and may include text, images, and links to other sources. Diaries are no longer kept under lock and key. Today's…

  14. "I Will Write to You with My Eyes": Reflective Text and Image Journals in the Undergraduate Classroom

    ERIC Educational Resources Information Center

    Hyland-Russell, Tara

    2014-01-01

    This article reports on a case study into students' perspectives on the use of "cahiers", reflective text and image journals. Narrative interviews and document analysis reveal that "cahiers" can be used effectively to engage students in course content and learning processes. Recent work in transformative learning…

  15. "Texts Like a Patchwork Quilt": Reading Picturebooks about Slavery

    ERIC Educational Resources Information Center

    Connolly, Paula T.

    2013-01-01

    This article examines narrative strategies present in picturebooks about slavery that feature quilts. Against the depicted dangers of slavery, images of quilts serve to offer a sense of hope and in that way they provide a means of discussing difficult subjects with very young readers. As a central image in these texts, the quilt is variously…

  16. What Mathematical Images Are in a Typical Mathematics Textbook? Implications for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Emerson, Robert Wall; Anderson, Dawn

    2018-01-01

    Introduction: Visually impaired students (that is, those who are blind or have low vision) have difficulty accessing curricular material in mathematical textbooks because many mathematics texts have visual images that contain important content information that are not transcribed or described in digital versions of the texts. However, little is…

  17. What are the concerns and goals of women attending a urogynaecology clinic? Content analysis of free-text data from an electronic pelvic floor assessment questionnaire (ePAQ-PF).

    PubMed

    Gray, Thomas; Strickland, Scarlett; Pooranawattanakul, Sarita; Li, Weiguang; Campbell, Patrick; Jones, Georgina; Radley, Stephen

    2018-06-27

    Understanding patients' concerns and goals is essential for providing individualised care in urogynaecology. The study objectives were to undertake a content analysis of free-text concerns and goals recorded by patients using an electronic pelvic-floor questionnaire (ePAQ-PF) and to measure how these related to self-reported symptom and health-related quality-of-life (HRQOL) data also recorded using ePAQ-PF. A total of 1996 consenting patients completed ePAQ-PF. Content analysis was undertaken of free-text responses to the item: 'Considering the issues that currently concern you the most, what do you hope to achieve from any help, advice or treatment?' Key content themes were identified by the lead researcher, and three researchers read and coded all recorded responses. Student's t test was used to compare ePAQ-PF domain scores for patients reporting concerns in the relevant domain with those who did not. In total, 63% of participants who completed the questionnaire recorded at least one free-text item. Content analysis identified 1560 individual concerns that were coded into the 19 ePAQ-PF domains. Symptom scores were significantly higher for patients reporting free-text concerns in 18 domains (p < 0.05). Additional concerns relating specifically to body image were recorded by 11% of patients. Key areas of importance emerging for personal goals included cure/improvement, better understanding, incontinence pad use, sexual function and surgery. Free-text reporting in ePAQ-PF is utilised by patients and facilitates self-expression and discussion of issues impacting on HRQOL. The significant relationship between recorded free-text concerns and ePAQ-PF domain scores suggests convergent validity for the instrument. Development and psychometric testing of a domain to assess body image is proposed.

  18. Segmental Rescoring in Text Recognition

    DTIC Science & Technology

    2014-02-04

    description relates to rescoring text hypotheses in text recognition based on segmental features. Offline printed text and handwriting recognition (OHR) can ... Handwriting, College Park, Md., 2006, which is incorporated by reference here. For the set of training images 202, a character modeler 208 receives

  19. Estimating Missing Features to Improve Multimedia Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagherjeiran, A; Love, N S; Kamath, C

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
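
    One simple way to realise this kind of completion is sketched below: the missing image features of a text-only query are estimated as the average image features of the training items whose text features are closest to the query. This generic nearest-neighbour estimator is an assumption for illustration, not necessarily the method used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        text_train = rng.random((500, 50))   # caption features of the training items
        img_train = rng.random((500, 80))    # image features of the same items

        def complete_query(text_query, k=10):
            d = np.linalg.norm(text_train - text_query, axis=1)
            nearest = np.argsort(d)[:k]
            est_img = img_train[nearest].mean(axis=0)  # estimated missing image part
            return np.concatenate([text_query, est_img])

        full_query = complete_query(rng.random(50))
        print(full_query.shape)  # (130,) = text features plus estimated image features

        # Retrieval then proceeds with the completed vector as if both modalities
        # had been supplied; relevance feedback can be used to refine the estimate.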

  20. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    PubMed

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since the pre-processing and initial segmentation steps for medical images directly affect the final segmentation results for the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate the above-mentioned two segmentation steps into one. This method has a low computational complexity for different kinds of medical images and a high segmentation precision. The method comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and the right breast, yielding overall metrics of UM = 0.9845, CM = 0.8142, and TM = 0.0726. The algorithm has great potential to perform the pre-processing and initial segmentation steps for various medical images, a prerequisite for assisting physicians to detect and diagnose clinical cases.
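
    For orientation, the sketch below runs a heavily simplified PCNN-style firing iteration: each pixel's activity is boosted by firing neighbours, and a dynamic threshold rises wherever neurons have just fired. The update rules and all constants are generic illustrations and are not the SPCNN equations or the adaptive parameters (elided above) used in the paper.

        import numpy as np

        def simple_pcnn(S, beta=0.3, alpha_e=0.7, V=20.0, n_iter=10):
            S = S.astype(float) / S.max()
            Y = np.zeros_like(S)                  # firing map
            E = np.ones_like(S)                   # dynamic threshold
            fired = np.zeros(S.shape, dtype=bool)
            for _ in range(n_iter):
                # linking input: mean firing state of the four nearest neighbours
                L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
                     np.roll(Y, 1, 1) + np.roll(Y, -1, 1)) / 4.0
                U = S * (1.0 + beta * L)          # internal activity
                Y = (U > E).astype(float)
                fired |= Y.astype(bool)
                E = alpha_e * E + V * Y           # raise threshold where firing occurred
            return fired                          # pixels that fired at least once

        img = np.random.default_rng(0).random((64, 64))
        print(simple_pcnn(img).sum())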

  1. An Improved Text Localization Method for Natural Scene Images

    NASA Astrophysics Data System (ADS)

    Jiang, Mengdi; Cheng, Jianghua; Chen, Minghui; Ku, Xishu

    2018-01-01

    In order to extract text information effectively from natural scene images with complex backgrounds, multi-orientation perspective, and multiple languages, we present a new method based on an improved Stroke Width Transform (SWT). Firstly, the Maximally Stable Extremal Region (MSER) method is used to detect text candidate regions. Secondly, the SWT algorithm is applied to the candidate regions, which improves edge detection compared with the traditional SWT method. Finally, Frequency-tuned (FT) visual saliency is introduced to remove non-text candidate regions. The experimental results show that the method achieves good robustness for complex backgrounds with multi-orientation perspective and a variety of characters and font sizes.
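
    The first stage of such a pipeline, MSER-based candidate detection, can be sketched with OpenCV as below; the stroke-width filtering and the frequency-tuned saliency check that follow are not shown, and the file name is illustrative.

        import cv2

        img = cv2.imread("scene.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        mser = cv2.MSER_create()
        regions, bboxes = mser.detectRegions(gray)

        # Keep boxes with roughly character-like aspect ratios as candidates for
        # the later stroke-width and saliency stages.
        candidates = [(x, y, w, h) for (x, y, w, h) in bboxes if 0.1 < w / float(h) < 10]
        for (x, y, w, h) in candidates:
            cv2.rectangle(img, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 1)
        cv2.imwrite("candidates.jpg", img)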

  2. Showing or Telling a Story: A Comparative Study of Public Education Texts in Multimodality and Monomodality

    ERIC Educational Resources Information Center

    Wang, Kelu

    2013-01-01

    Multimodal texts that combine words and images produce meaning in a different way from monomodal texts that rely on words. They differ not only in representing the subject matter, but also constructing relationships between text producers and text receivers. This article uses two multimodal texts and one monomodal written text as samples, which…

  3. Development and application of operational techniques for the inventory and monitoring of resources and uses for the Texas coastal zone. Volume 1: Text

    NASA Technical Reports Server (NTRS)

    Harwood, P. (Principal Investigator); Finley, R.; Mcculloch, S.; Malin, P. A.; Schell, J. A.

    1977-01-01

    The author has identified the following significant results. Image interpretation and computer-assisted techniques were developed to analyze LANDSAT scenes in support of resource inventory and monitoring requirements for the Texas coastal region. Land cover and land use maps, at a scale of 1:125,000 for the image interpretation product and 1:24,000 for the computer-assisted product, were generated covering four Texas coastal test sites. Classification schemes which parallel national systems were developed for each procedure, including 23 classes for image interpretation technique and 13 classes for the computer-assisted technique. Results indicate that LANDSAT-derived land cover and land use maps can be successfully applied to a variety of planning and management activities on the Texas coast. Computer-derived land/water maps can be used with tide gage data to assess shoreline boundaries for management purposes.

  4. ESARR: enhanced situational awareness via road sign recognition

    NASA Astrophysics Data System (ADS)

    Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.

    2010-04-01

    The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from vehicle-mounted camera system, and preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution low contrast sign images. In this paper, ESARR development progress will be reported on, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system will be described along with the challenges and progress in overcoming them.

  5. The sources of Gessner's pictures for the Historia animalium.

    PubMed

    Kusukawa, S

    2010-07-01

    Gessner's sources for the pictures in his Historia animalium were varied in kind and in quality. This should be understood within the larger context of the Historia animalium in which Gessner sought to collect everything ever written about animals, an enterprise that could not be completed by a single individual. Just as Gessner did not distil or reduce similar texts but retained these as well as contradictory or false textual descriptions as part of a repository of knowledge, so also Gessner included several pictures of the same animal, false or badly drawn ones, and juxtaposed erroneous and 'true' images. The attribution of images to witnesses and correspondences also reflects Gessner's strategy to credit those who drew his attention to new information first. The sources of Gessner's images thus indicate how his visual world encompassed more than the strictly self-observable, and a pictorial practice that was intimately connected with textual traditions and intellectual networks.

  6. Mystery #11

    Atmospheric Science Data Center

    2013-04-22

    article title: MISR Mystery Image Quiz #11. Here's another chance to play geographical detective! These images ... MISR Team. Text acknowledgment: Clare Averill, David J. Diner, Graham Bothwell (Jet Propulsion Laboratory).

  7. Vaccine Images on Twitter: Analysis of What Images are Shared

    PubMed Central

    Dredze, Mark

    2018-01-01

    Background Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386

  8. Vaccine Images on Twitter: Analysis of What Images are Shared.

    PubMed

    Chen, Tao; Dredze, Mark

    2018-04-03

    Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.
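
    The modelling step described above can be sketched as a logistic regression over simple image-derived features. The feature set here (sentiment score, embedded-text flag, person and syringe detections, follower count) and the random data are illustrative assumptions, not the authors' feature engineering.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        X = np.column_stack([
            rng.normal(0, 1, n),      # image sentiment score
            rng.integers(0, 2, n),    # contains embedded text?
            rng.integers(0, 2, n),    # contains a person?
            rng.integers(0, 2, n),    # contains a syringe?
            rng.poisson(500, n),      # author follower count
        ])
        y = rng.integers(0, 2, n)     # retweeted or not (stand-in labels)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        print(dict(zip(["sentiment", "has_text", "person", "syringe", "followers"],
                       np.round(model.coef_[0], 3))))

        # The sign and magnitude of each coefficient indicate how that image
        # characteristic correlates with the odds of the tweet being shared.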

  9. HIT: a new approach for hiding multimedia information in text

    NASA Astrophysics Data System (ADS)

    El-Kwae, Essam A.; Cheng, Li

    2002-04-01

    A new technique for hiding multimedia data in text, called the Hiding in Text (HIT) technique, is introduced. The HIT technique can transform any type of media represented by a long binary string into innocuous text that follows correct grammatical rules. This technique divides English words into types where each word can appear in any number of types. For each type, there is a dictionary, which maps words to binary codes. Marker types are special types whose words do not repeat in any other type. Each generated sentence must include at least one word from the marker type. In the hiding phase, a binary string is input to the HIT encoding algorithm, which then selects sentence templates at random. The output is a set of English sentences according to the selected templates and the dictionaries of types. In the retrieving phase, the HIT technique uses the position of the marker word to identify the template used to build each sentence. The proposed technique greatly improves the efficiency and the security features of previous solutions. Examples for hiding text and image information in a cover text are given to illustrate the HIT technique.
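
    A toy, hedged sketch of the template-and-dictionary idea behind HIT follows. The word types, sentence templates, and one-bit codes are invented for the example; the actual HIT dictionaries, grammar rules, and code lengths are not reproduced here.

```python
# Simplified illustration of template-based text hiding in the spirit of HIT.
import random

# Each type maps 1-bit codes to words; "verb" acts as the marker type here
# (its words do not appear in any other type).
TYPES = {
    "noun": {"0": "dog", "1": "cat"},
    "verb": {"0": "sees", "1": "chases"},   # marker words
    "adj":  {"0": "small", "1": "quick"},
}
# Templates are ordered lists of types; the marker position identifies them.
TEMPLATES = [["adj", "noun", "verb"], ["noun", "verb", "adj"]]

def encode(bits: str) -> list[str]:
    sentences, i = [], 0
    while i < len(bits):
        tpl = random.choice(TEMPLATES)         # template chosen at random
        words = []
        for t in tpl:
            b = bits[i] if i < len(bits) else "0"   # pad the last sentence
            words.append(TYPES[t][b])
            i += 1
        sentences.append("The " + " ".join(words) + ".")
    return sentences

def decode(sentences: list[str]) -> str:
    rev = {t: {w: b for b, w in d.items()} for t, d in TYPES.items()}
    bits = []
    for s in sentences:
        words = s.strip(".").split()[1:]            # drop the leading "The"
        marker_pos = next(i for i, w in enumerate(words) if w in rev["verb"])
        tpl = next(t for t in TEMPLATES if t.index("verb") == marker_pos)
        bits.extend(rev[t][w] for t, w in zip(tpl, words))
    return "".join(bits)

payload = "101100"
cover = encode(payload)
assert decode(cover).startswith(payload)
print(cover)
```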

  10. Measurements of the ablation-front trajectory and low-mode nonuniformity in direct-drive implosions using x-ray self-emission shadowgraphy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, D. T.; Davis, A. K.; Armstrong, W.

    Self-emission x-ray shadowgraphy provides a method to measure the ablation-front trajectory and low-mode nonuniformity of a target imploded by directly illuminating a fusion capsule with laser beams. The technique uses time-resolved images of soft x-rays (> 1 keV) emitted from the coronal plasma of the target imaged onto an x-ray framing camera to determine the position of the ablation front. Methods used to accurately measure the ablation-front radius ($\delta R = \pm 1.15~\mu\text{m}$), image-to-image timing ($\delta(\Delta t) = \pm 2.5$ ps), and absolute timing ($\delta t = \pm 10$ ps) are presented. Angular averaging of the images provides an average radius measurement of $\delta(R_{\text{av}}) = \pm 0.15~\mu\text{m}$ and an error in velocity of $\delta V/V = \pm 3\%$. This technique was applied on the Omega Laser Facility and the National Ignition Facility.

  11. Measurements of the ablation-front trajectory and low-mode nonuniformity in direct-drive implosions using x-ray self-emission shadowgraphy

    DOE PAGES

    Michel, D. T.; Davis, A. K.; Armstrong, W.; ...

    2015-07-08

    Self-emission x-ray shadowgraphy provides a method to measure the ablation-front trajectory and low-mode nonuniformity of a target imploded by directly illuminating a fusion capsule with laser beams. The technique uses time-resolved images of soft x-rays (> 1 keV) emitted from the coronal plasma of the target imaged onto an x-ray framing camera to determine the position of the ablation front. Methods used to accurately measure the ablation-front radius ($\delta R = \pm 1.15~\mu\text{m}$), image-to-image timing ($\delta(\Delta t) = \pm 2.5$ ps), and absolute timing ($\delta t = \pm 10$ ps) are presented. Angular averaging of the images provides an average radius measurement of $\delta(R_{\text{av}}) = \pm 0.15~\mu\text{m}$ and an error in velocity of $\delta V/V = \pm 3\%$. This technique was applied on the Omega Laser Facility and the National Ignition Facility.

  12. Seeing and Reading Red: Hue and Color-word Correlation in Images and Attendant Text on the WWW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newsam, S

    2004-07-12

    This work represents an initial investigation into determining whether correlations actually exist between metadata and content descriptors in multimedia datasets. We provide a quantitative method for evaluating whether the hue of images on the WWW is correlated with the occurrence of color-words in metadata such as URLs, image names, and attendant text. It turns out that such a correlation does exist: the likelihood that a particular color appears in an image whose URL, name, and/or attendant text contains the corresponding color-word is generally at least twice the likelihood that the color appears in a randomly chosen image on the WWW. While this finding might not be significant in and of itself, it represents an initial step towards quantitatively establishing that other, perhaps more useful correlations exist. These correlations form the basis for exciting novel approaches that leverage semi-supervised datasets, such as the WWW, to overcome the semantic gap that has hampered progress in multimedia information retrieval for some time now.

  13. Tabular data, text, and graphical images in support of the 1995 National assessment of United States oil and gas resources

    USGS Publications Warehouse

    Charpentier, Ronald R.; Klett, T.R.; Obuch, R.C.; Brewton, J.D.

    1996-01-01

    This CD-ROM contains files in support of the 1995 USGS National assessment of United States oil and gas resources (DDS-30), which was published separately and summarizes the results of a 3-year study of the oil and gas resources of the onshore and state waters of the United States. The study describes about 560 oil and gas plays in the United States, both confirmed and hypothetical, conventional and unconventional. A parallel study of the Federal offshore is being conducted by the U.S. Minerals Management Service. This CD-ROM contains files in multiple formats, so that almost any computer user can import them into word processors and spreadsheets. The tabular data include some tables not released in DDS-30. No proprietary data are released on this CD-ROM, but some tables of summary statistics from the proprietary files are provided. The complete text of DDS-30 is also available, as well as many figures. Also included are some of the programs used in the assessment, in source code and with supporting documentation. A companion CD-ROM (DDS-35) includes the map data and the same text data, but none of the tabular data or assessment programs.

  14. A Java viewer to publish Digital Imaging and Communications in Medicine (DICOM) radiologic images on the World Wide Web.

    PubMed

    Setti, E; Musumeci, R

    2001-06-01

    The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Expert Group (JPEG) and Graphic Interchange Format (GIF). Currently, neither browser can display radiologic images in native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even the older versions. The software is free and available from the author.

  15. Spatial and symbolic queries for 3D image data

    NASA Astrophysics Data System (ADS)

    Benson, Daniel C.; Zick, Gregory L.

    1992-04-01

    We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.

  16. Java-based PACS and reporting system for nuclear medicine

    NASA Astrophysics Data System (ADS)

    Slomka, Piotr J.; Elliott, Edward; Driedger, Albert A.

    2000-05-01

    In medical imaging practice, images and reports often need to be reviewed and edited from many locations. We have designed and implemented a Java-based Remote Viewing and Reporting System (JaRRViS) for a nuclear medicine department, which is deployed as a web service at a fraction of the cost of dedicated PACS systems. The system can be extended to other imaging modalities. JaRRViS interfaces to the clinical patient databases of imaging workstations. Specialized nuclear medicine applets support interactive displays of data such as 3-D gated SPECT with all the necessary options such as cine, filtering, dynamic lookup tables, and reorientation. The reporting module is implemented as a separate applet using the Java Foundation Classes (JFC) Swing Editor Kit and allows composition of multimedia reports after selection and annotation of appropriate images. The reports are stored on the server in HTML format. JaRRViS uses Java Servlets for the preparation and storage of final reports. The http links to the reports or to the patient's raw images with applets can be obtained from JaRRViS by any Hospital Information System (HIS) via standard queries. Such links can be sent via e-mail or included as text fields in any HIS database, providing direct access to the patient reports and images via standard web browsers.

  17. Uneasy Terrain: Image, Text, Landscape, and Contemporary Indigenous Artists in the United States

    ERIC Educational Resources Information Center

    Ohnesorge, Karen

    2008-01-01

    Like many contemporary Indigenous artists in the United States, Flathead artist Jaune Quick-to-See Smith seeks to clarify existing relationships among race, place, and economics as well as to create new relationships. In particular, she and her peers combine image and text to interrogate the genre of landscape painting as a stage for fantasies of…

  18. DICOM: a standard for medical imaging

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Bidgood, W. Dean

    1993-01-01

    Since 1983, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have been engaged in developing standards related to medical imaging. This alliance of users and manufacturers was formed to meet the needs of the medical imaging community as its use of digital imaging technology increased. The development of electronic picture archiving and communications systems (PACS), which could connect a number of medical imaging devices together in a network, led to the need for a standard interface and data structure for use on imaging equipment. Since medical image files tend to be very large and include much text information along with the image, the need for a fast, flexible, and extensible standard was quickly established. The ACR-NEMA Digital Imaging and Communications Standards Committee developed a standard which met these needs. The standard (ACR-NEMA 300-1988) was first published in 1985 and revised in 1988. It is increasingly available from equipment manufacturers. The current work of the ACR-NEMA Committee has been to extend the standard to incorporate direct network connection features, and build on standards work done by the International Standards Organization in its Open Systems Interconnection series. This new standard, called Digital Imaging and Communication in Medicine (DICOM), follows an object-oriented design methodology and makes use of as many existing internationally accepted standards as possible. This paper gives a brief overview of the requirements for communications standards in medical imaging, a history of the ACR-NEMA effort and what it has produced, and a description of the DICOM standard.

  19. Non-smoking male adolescents' reactions to cigarette warnings.

    PubMed

    Pepper, Jessica K; Cameron, Linda D; Reiter, Paul L; McRee, Annie-Laurie; Brewer, Noel T

    2013-01-01

    The U.S. Food and Drug Administration (FDA) is working to introduce new graphic warning labels for cigarette packages, the first change in cigarette warnings in more than 25 years. We sought to examine whether warnings discouraged participants from wanting to smoke and altered perceived likelihood of harms among adolescent males and whether these warning effects varied by age. A national sample of 386 non-smoking American males ages 11-17 participated in an online experiment during fall 2010. We randomly assigned participants to view warnings using a 2 × 2 between-subjects design. The warnings described a harm of smoking (addiction or lung cancer) using text only or text plus an image used on European cigarette package warnings. Analyses tested whether age moderated the warnings' impact on risk perceptions and smoking motivations. The warnings discouraged most adolescents from wanting to smoke, but lung cancer warnings discouraged them more than addiction warnings did (60% vs. 34% were "very much" discouraged, p<.001). Including an image had no effect on discouragement. The warnings affected several beliefs about the harms from smoking, and age moderated these effects. Adolescents said addiction was easier to imagine and more likely to happen to them than lung cancer. They also believed that their true likelihood of experiencing any harm was lower than what an expert would say. Our findings suggest that warnings focusing on lung cancer, rather than addiction, are more likely to discourage wanting to smoke among adolescent males and enhance their ability to imagine the harmful consequences of smoking. Including images on warnings had little effect on non-smoking male adolescents' discouragement or beliefs, though additional research on the effects of pictorial warnings for this at-risk population is needed as the FDA moves forward with developing new graphic labels.

  20. Figure Text Extraction in Biomedical Literature

    PubMed Central

    Kim, Daehyun; Yu, Hong

    2011-01-01

    Background: Figures are ubiquitous in biomedical full-text articles, and they represent important biomedical knowledge. However, the sheer volume of biomedical publications has made it necessary to develop computational approaches for accessing figures. Therefore, we are developing the Biomedical Figure Search engine (http://figuresearch.askHERMES.org) to allow bioscientists to access figures efficiently. Since text frequently appears in figures, automatically extracting such text may assist the task of mining information from figures. Little research, however, has been conducted exploring text extraction from biomedical figures. Methodology: We first evaluated an off-the-shelf Optical Character Recognition (OCR) tool on its ability to extract text from figures appearing in biomedical full-text articles. We then developed a Figure Text Extraction Tool (FigTExT) to improve the performance of the OCR tool for figure text extraction through the use of three innovative components: image preprocessing, character recognition, and text correction. We first developed image preprocessing to enhance image quality and to improve text localization. Then we adapted the off-the-shelf OCR tool on the improved text localization for character recognition. Finally, we developed and evaluated a novel text correction framework by taking advantage of figure-specific lexicons. Results/Conclusions: The evaluation on 382 figures (9,643 figure texts in total) randomly selected from PubMed Central full-text articles shows that FigTExT performed with 84% precision, 98% recall, and 90% F1-score for text localization and with 62.5% precision, 51.0% recall and 56.2% F1-score for figure text extraction. When limiting figure texts to those judged by domain experts to be important content, FigTExT performed with 87.3% precision, 68.8% recall, and 77% F1-score. FigTExT significantly improved the performance of the off-the-shelf OCR tool we used, which on its own performed with 36.6% precision, 19.3% recall, and 25.3% F1-score for text extraction. In addition, our results show that FigTExT can extract texts that do not appear in figure captions or other associated text, further suggesting the potential utility of FigTExT for improving figure search. PMID:21249186
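
    The sketch below illustrates the general three-stage idea (image preprocessing, off-the-shelf OCR, lexicon-based correction), assuming pytesseract as a stand-in OCR engine and a tiny invented figure lexicon; it is not the FigTExT implementation itself.

```python
# Rough sketch of the preprocess -> OCR -> lexicon-correction pipeline.
import difflib
from PIL import Image, ImageOps, ImageFilter
import pytesseract

FIGURE_LEXICON = {"p53", "apoptosis", "control", "wildtype"}  # assumed terms

def preprocess(img: Image.Image) -> Image.Image:
    # Upscale, convert to grayscale, and sharpen to help text localization.
    g = ImageOps.grayscale(img.resize((img.width * 2, img.height * 2)))
    return g.filter(ImageFilter.SHARPEN)

def correct(token: str) -> str:
    # Snap OCR output to the closest figure-specific lexicon entry, if any.
    match = difflib.get_close_matches(token.lower(), FIGURE_LEXICON, n=1, cutoff=0.8)
    return match[0] if match else token

def extract_figure_text(path: str) -> list[str]:
    raw = pytesseract.image_to_string(preprocess(Image.open(path)))
    return [correct(tok) for tok in raw.split()]

if __name__ == "__main__":
    print(extract_figure_text("figure1.png"))  # example path is an assumption
```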

  1. Figure text extraction in biomedical literature.

    PubMed

    Kim, Daehyun; Yu, Hong

    2011-01-13

    Figures are ubiquitous in biomedical full-text articles, and they represent important biomedical knowledge. However, the sheer volume of biomedical publications has made it necessary to develop computational approaches for accessing figures. Therefore, we are developing the Biomedical Figure Search engine (http://figuresearch.askHERMES.org) to allow bioscientists to access figures efficiently. Since text frequently appears in figures, automatically extracting such text may assist the task of mining information from figures. Little research, however, has been conducted exploring text extraction from biomedical figures. We first evaluated an off-the-shelf Optical Character Recognition (OCR) tool on its ability to extract text from figures appearing in biomedical full-text articles. We then developed a Figure Text Extraction Tool (FigTExT) to improve the performance of the OCR tool for figure text extraction through the use of three innovative components: image preprocessing, character recognition, and text correction. We first developed image preprocessing to enhance image quality and to improve text localization. Then we adapted the off-the-shelf OCR tool on the improved text localization for character recognition. Finally, we developed and evaluated a novel text correction framework by taking advantage of figure-specific lexicons. The evaluation on 382 figures (9,643 figure texts in total) randomly selected from PubMed Central full-text articles shows that FigTExT performed with 84% precision, 98% recall, and 90% F1-score for text localization and with 62.5% precision, 51.0% recall and 56.2% F1-score for figure text extraction. When limiting figure texts to those judged by domain experts to be important content, FigTExT performed with 87.3% precision, 68.8% recall, and 77% F1-score. FigTExT significantly improved the performance of the off-the-shelf OCR tool we used, which on its own performed with 36.6% precision, 19.3% recall, and 25.3% F1-score for text extraction. In addition, our results show that FigTExT can extract texts that do not appear in figure captions or other associated text, further suggesting the potential utility of FigTExT for improving figure search.

  2. Strategies for de-identification and anonymization of electronic health record data for use in multicenter research studies.

    PubMed

    Kushida, Clete A; Nichols, Deborah A; Jadrnicek, Rik; Miller, Ric; Walsh, James K; Griffin, Kara

    2012-07-01

    De-identification and anonymization are strategies that are used to remove patient identifiers in electronic health record data. The use of these strategies in multicenter research studies is paramount in importance, given the need to share electronic health record data across multiple environments and institutions while safeguarding patient privacy. Systematic literature search using keywords of de-identify, deidentify, de-identification, deidentification, anonymize, anonymization, data scrubbing, and text scrubbing. Search was conducted up to June 30, 2011 and involved 6 different common literature databases. A total of 1798 prospective citations were identified, and 94 full-text articles met the criteria for review and the corresponding articles were obtained. Search results were supplemented by review of 26 additional full-text articles; a total of 120 full-text articles were reviewed. A final sample of 45 articles met inclusion criteria for review and discussion. Articles were grouped into text, images, and biological sample categories. For text-based strategies, the approaches were segregated into heuristic, lexical, and pattern-based systems versus statistical learning-based systems. For images, approaches that de-identified photographic facial images and magnetic resonance image data were described. For biological samples, approaches that managed the identifiers linked with these samples were discussed, particularly with respect to meeting the anonymization requirements needed for Institutional Review Board exemption under the Common Rule. Current de-identification strategies have their limitations, and statistical learning-based systems have distinct advantages over other approaches for the de-identification of free text. True anonymization is challenging, and further work is needed in the areas of de-identification of datasets and protection of genetic information.

  3. 36 CFR § 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  4. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  5. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  6. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  7. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  8. Development of a Hampton University Program for Novel Breast Cancer Imaging and Therapy Research

    DTIC Science & Technology

    2015-06-01

    student (Nanda Karthik) involved... Aim 2: Develop and test a practical method for application of a magnetic field... a Department of Energy (DOE) nuclear physics research facility operated by Jefferson Science Associates LLC. Jefferson Lab resources for this... minimally affected by breast density because of the higher-energy photons of 99mTc. In a recent study that included patients who had inconclusive

  9. "Visual Learning Is the Best Learning--It Lets You Be Creative while Learning": Exploring Ways to Begin Guided Writing in Second Language Learning through the Use of Comics

    ERIC Educational Resources Information Center

    Rossetto, Marietta; Chiera-Macchia, Antonella

    2011-01-01

    This study investigated the use of comics (Cary, 2004) in a guided writing experience in secondary school Italian language learning. The main focus of the peer group interaction task included the exploration of visual sequencing and visual integration (Bailey, O'Grady-Jones, & McGown, 1995) using image and text to create a comic strip narrative in…

  10. Robust digital image inpainting algorithm in the wireless environment

    NASA Astrophysics Data System (ADS)

    Karapetyan, G.; Sarukhanyan, H. G.; Agaian, S. S.

    2014-05-01

    Image or video inpainting is the process/art of retrieving missing portions of an image without introducing undesirable artifacts that are undetectable by an ordinary observer. An image/video can be damaged due to a variety of factors, such as deterioration due to scratches, laser dazzling effects, wear and tear, dust spots, loss of data when transmitted through a channel, etc. Applications of inpainting include image restoration (removing laser dazzling effects, dust spots, date, text, time, etc.), image synthesis (texture synthesis), completing panoramas, image coding, wireless transmission (recovery of the missing blocks), digital culture protection, image de-noising, fingerprint recognition, and film special effects and production. Most inpainting methods can be classified in two key groups: global and local methods. Global methods are used for generating large image regions from samples while local methods are used for filling in small image gaps. Each method has its own advantages and limitations. For example, the global inpainting methods perform well on textured image retrieval, whereas the classical local methods perform poorly. In addition, some of the techniques are computationally intensive; exceeding the capabilities of most currently used mobile devices. In general, the inpainting algorithms are not suitable for the wireless environment. This paper presents a new and efficient scheme that combines the advantages of both local and global methods into a single algorithm. Particularly, it introduces a blind inpainting model to solve the above problems by adaptively selecting support area for the inpainting scheme. The proposed method is applied to various challenging image restoration tasks, including recovering old photos, recovering missing data on real and synthetic images, and recovering the specular reflections in endoscopic images. A number of computer simulations demonstrate the effectiveness of our scheme and also illustrate the main properties and implementation steps of the presented algorithm. Furthermore, the simulation results show that the presented method is among the state-of-the-art and compares favorably against many available methods in the wireless environment. Robustness in the wireless environment with respect to the shape of the manually selected "marked" region is also illustrated. Currently, we are working on the expansion of this work to video and 3-D data.
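
    For readers unfamiliar with the classical local methods the paper contrasts with, the minimal example below applies OpenCV's fast-marching inpainting to a hypothetical damaged region; it is not the blind inpainting scheme proposed by the authors.

```python
# Minimal example of classical local inpainting with OpenCV.
import cv2
import numpy as np

img = cv2.imread("damaged.jpg")              # input path is an assumption
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:120, 50:200] = 255                  # hypothetical damaged region

# Fast marching method (INPAINT_TELEA); 3 is the neighborhood radius in pixels.
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("restored.jpg", restored)
```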

  11. Experiments on Supervised Learning Algorithms for Text Categorization

    NASA Technical Reports Server (NTRS)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

    Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and data acquisitions (ACQ), WebKB, and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
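
    A hedged sketch of such a comparison is shown below, using scikit-learn's 20 Newsgroups loader, TF-IDF features, a linear SVM, and PLS regression thresholded into a classifier. The categories, feature size, component count, and threshold are illustrative choices, not the paper's experimental setup.

```python
# Sketch: SVM vs. PLS-based classification on TF-IDF text features.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import accuracy_score

cats = ["sci.space", "rec.autos"]                       # illustrative binary task
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

vec = TfidfVectorizer(max_features=5000)
Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)

svm = LinearSVC().fit(Xtr, train.target)
print("SVM accuracy:", accuracy_score(test.target, svm.predict(Xte)))

# PLS works on dense arrays and predicts a continuous score; threshold at 0.5.
pls = PLSRegression(n_components=10).fit(Xtr.toarray(), train.target.astype(float))
pls_pred = (pls.predict(Xte.toarray()).ravel() > 0.5).astype(int)
print("PLS accuracy:", accuracy_score(test.target, pls_pred))
```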

  12. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility, and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748)[1] and the multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), for it is also based on XML. As a result, the new format realizes the separation of content (including text content and image) and display style. Meanwhile, since OpenDocument files take the format of a ZIP compressed archive, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
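
    The sketch below illustrates why an ODF-style container is attractive: it is a ZIP archive whose content is XML, so text and images travel together with lossless compression. The XML fragment and file names are schematic assumptions, not a valid ODF content model or the authors' DICOM mapping.

```python
# Packaging text content and an image into an OpenDocument-style ZIP.
import zipfile

def write_odf_like(path: str, report_text: str, image_bytes: bytes) -> None:
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        # The mimetype entry is conventionally stored first and uncompressed.
        z.writestr(zipfile.ZipInfo("mimetype"),
                   "application/vnd.oasis.opendocument.text",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("Pictures/image1.png", image_bytes)
        # Schematic content.xml (namespaces omitted for brevity).
        z.writestr("content.xml",
                   "<office:document-content>"
                   f"<text:p>{report_text}</text:p>"
                   "<draw:image xlink:href='Pictures/image1.png'/>"
                   "</office:document-content>")

write_odf_like("study.odt", "Patient name: ANONYMIZED", b"\x89PNG...")
```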

  13. Offline Arabic handwriting recognition: a survey.

    PubMed

    Lorigo, Liana M; Govindaraju, Venu

    2006-05-01

    The automatic recognition of text on scanned images has enabled many applications such as searching for words in large volumes of documents, automatic sorting of postal mail, and convenient editing of previously printed documents. The domain of handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different methods have been proposed and applied to various types of images. This paper provides a comprehensive review of these methods. It is the first survey to focus on Arabic handwriting recognition and the first Arabic character recognition survey to provide recognition rates and descriptions of test data for the approaches discussed. It includes background on the field, discussion of the methods, and future research directions.

  14. Cascaded Segmentation-Detection Networks for Word-Level Text Spotting.

    PubMed

    Qin, Siyang; Manduchi, Roberto

    2017-11-01

    We introduce an algorithm for word-level text spotting that is able to accurately and reliably determine the bounding regions of individual words of text "in the wild". Our system is formed by the cascade of two convolutional neural networks. The first network is fully convolutional and is in charge of detecting areas containing text. This results in a very reliable but possibly inaccurate segmentation of the input image. The second network (inspired by the popular YOLO architecture) analyzes each segment produced in the first stage, and predicts oriented rectangular regions containing individual words. No post-processing (e.g. text line grouping) is necessary. With execution time of 450 ms for a 1000 × 560 image on a Titan X GPU, our system achieves good performance on the ICDAR 2013, 2015 benchmarks [2], [1].
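
    The control-flow sketch below shows only the wiring of such a cascade: a segmentation stage proposes text regions, and a detector is run on each connected region to emit word boxes. Both networks are replaced by trivial stand-ins, so this is a sketch of the pipeline shape, not of the paper's models.

```python
# Two-stage cascade wiring: segment text regions, then detect words per region.
import numpy as np
from scipy import ndimage

def segment_text(image: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: return a binary map of 'likely text' pixels."""
    return image > image.mean()                      # placeholder heuristic

def detect_words(patch: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Stage 2 stand-in: return word boxes (y0, x0, y1, x1) within a patch."""
    return [(0, 0, patch.shape[0], patch.shape[1])]  # placeholder: whole patch

def spot_words(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    mask = segment_text(image)
    labels, _ = ndimage.label(mask)                  # connected text regions
    boxes = []
    for region in ndimage.find_objects(labels):
        y, x = region
        for (y0, x0, y1, x1) in detect_words(image[region]):
            boxes.append((y.start + y0, x.start + x0, y.start + y1, x.start + x1))
    return boxes

print(spot_words(np.random.rand(64, 96)))
```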

  15. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    PubMed

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

    Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than CT due to lower bony signal-to-noise. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymous MR images data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed with a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected from the preceding steps were merged and subjected to a series of morphological processes for completion of the mandibular body region definition. Comparisons of the accuracy of segmentation between the two-stage approach, conventional region growing method, 3D level set method, and manual segmentation were made with Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the proposed two-stage rule-constrained seedless region growing approach. The accuracy achieved with the two-stage approach is higher than CRG and 3D level set.
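
    A loose sketch of the two-stage flavor of this approach (global thresholding, component labeling, rule-relaxed merging near detected regions, and morphological completion) is given below on a synthetic volume; the thresholds and structuring elements are arbitrary assumptions, not the paper's tuned rules.

```python
# Loose two-stage threshold/grow/merge/close sketch on a synthetic volume.
import numpy as np
from scipy import ndimage

volume = np.random.rand(32, 64, 64)          # stand-in for an MR volume

# Stage 1: seedless detection of bright voxels and removal of tiny components.
tb_mask = volume > 0.90
tb_labels, _ = ndimage.label(tb_mask)
sizes = ndimage.sum(tb_mask, tb_labels, index=np.arange(1, tb_labels.max() + 1))
keep = np.isin(tb_labels, np.where(sizes > 5)[0] + 1)

# Stage 2: merge nearby lower-intensity voxels (the paper uses rule-constrained
# 2D merging per slice; here simply a looser threshold near stage-1 voxels).
near = ndimage.binary_dilation(keep, iterations=2)
merged = keep | ((volume > 0.75) & near)

# Morphological completion of the region definition.
result = ndimage.binary_closing(merged, structure=np.ones((3, 3, 3)))
print("segmented voxels:", int(result.sum()))
```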

  16. Optics design for J-TEXT ECE imaging with field curvature adjustment lens.

    PubMed

    Zhu, Y; Zhao, Z; Liu, W D; Xie, J; Hu, X; Muscatello, C M; Domier, C W; Luhmann, N C; Chen, M; Ren, X; Tobias, B J; Zhuang, G; Yang, Z

    2014-11-01

    Significant progress has been made in the imaging and visualization of magnetohydrodynamic and microturbulence phenomena in magnetic fusion plasmas. Of particular importance has been microwave electron cyclotron emission imaging (ECEI) for imaging Te fluctuations. Key to the success of ECEI is a large Gaussian optics system constituting a major portion of the focusing of the microwave radiation from the plasma to the detector array. Both the spatial resolution and observation range are dependent upon the imaging optics system performance. In particular, it is critical that the field curvature on the image plane is reduced to decrease crosstalk between vertical channels. The receiver optics systems for two ECEI on the J-TEXT device have been designed to ameliorate these problems and provide good performance with additional field curvature adjustment lenses with a meniscus shape to correct the aberrations from several spherical surfaces.

  17. Novel grid-based optical Braille conversion: from scanning to wording

    NASA Astrophysics Data System (ADS)

    Yoosefi Babadi, Majid; Jafari, Shahram

    2011-12-01

    Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising and converting them into English ASCII text documents inside a computer. The resulting words are verified using the relevant dictionary to provide the final output. The algorithms employed in this article can be easily modified to be implemented on other visual pattern recognition systems and text extraction applications. This technique has several advantages, including: simplicity of the algorithm, high speed of execution, ability to help visually impaired persons and blind people to work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille to understand hard-copy Braille manuscripts.

  18. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial...photographs; but by the 1960’s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence...scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision

  19. Training the intelligent eye: understanding illustrations in early modern astronomy texts.

    PubMed

    Crowther, Kathleen M; Barker, Peter

    2013-09-01

    Throughout the early modern period, the most widely read astronomical textbooks were Johannes de Sacrobosco's De sphaera and the Theorica planetarum, ultimately in the new form introduced by Georg Peurbach. This essay argues that the images in these texts were intended to develop an "intelligent eye." Students were trained to transform representations of specific heavenly phenomena into moving mental images of the structure of the cosmos. Only by learning the techniques of mental visualization and manipulation could the student "see" in the mind's eye the structure and motions of the cosmos. While anyone could look up at the heavens, only those who had acquired the intelligent eye could comprehend the divinely created order of the universe. Further, the essay demonstrates that the visual program of the Sphaera and Theorica texts played a significant and hitherto unrecognized role in later scientific work. Copernicus, Galileo, and Kepler all utilized the same types of images in their own texts to explicate their ideas about the cosmos.

  20. The informatics of a C57BL/6J mouse brain atlas.

    PubMed

    MacKenzie-Graham, Allan; Jones, Eagle S; Shattuck, David W; Dinov, Ivo D; Bota, Mihail; Toga, Arthur W

    2003-01-01

    The Mouse Atlas Project (MAP) aims to produce a framework for organizing and analyzing the large volumes of neuroscientific data produced by the proliferation of genetically modified animals. Atlases provide an invaluable aid in understanding the impact of genetic manipulations by providing a standard for comparison. We use a digital atlas as the hub of an informatics network, correlating imaging data, such as structural imaging and histology, with text-based data, such as nomenclature, connections, and references. We generated brain volumes using magnetic resonance microscopy (MRM), classical histology, and immunohistochemistry, and registered them into a common and defined coordinate system. Specially designed viewers were developed in order to visualize multiple datasets simultaneously and to coordinate between textual and image data. Researchers can navigate through the brain interchangeably, in either a text-based or image-based representation that automatically updates information as they move. The atlas also allows the independent entry of other types of data, the facile retrieval of information, and the straight-forward display of images. In conjunction with centralized servers, image and text data can be kept current and can decrease the burden on individual researchers' computers. A comprehensive framework that encompasses many forms of information in the context of anatomic imaging holds tremendous promise for producing new insights. The atlas and associated tools can be found at http://www.loni.ucla.edu/MAP.

  1. An analysis of absorbing image on the Indonesian text by using color matching

    NASA Astrophysics Data System (ADS)

    Hutagalung, G. A.; Tulus; Iryanto; Lubis, Y. F. A.; Khairani, M.; Suriati

    2018-03-01

    Messages are inserted into an image by embedding each character of the message into selected pixels. One way of inserting a message into an image is to embed the ASCII decimal value of each character into the decimal value of a primary color of the image. Messages use characters such as letters, numbers, or symbols, and the number and frequency of the letters used differ from word to word and from language to language. In Indonesian, the letter A is the most widely used, and the usage of other letters strongly affects the clarity of a message or text presented in that language. This study aims to determine the capacity of an image to absorb a message in Indonesian and to identify the factors that affect differences in that capacity. The data used in this study consist of several images in JPG or JPEG format, obtained from image-drawing software or from image-capture hardware, at different image sizes. Test results were obtained for four samples of a color image using an image size of 1200 x 1920.
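
    A toy illustration of the insertion idea described (writing each character's ASCII decimal value into a primary-color value of successive pixels) is sketched below; the pixel ordering and channel choice are assumptions made for the example, not the paper's exact scheme.

```python
# Toy embedding of an ASCII message into the red channel of an RGB image.
import numpy as np

def embed(img: np.ndarray, message: str) -> np.ndarray:
    out = img.copy()
    flat = out.reshape(-1, 3)
    if len(message) > flat.shape[0]:
        raise ValueError("message longer than the image's pixel capacity")
    for i, ch in enumerate(message):
        flat[i, 0] = ord(ch)              # overwrite the red value with ASCII
    return out

def extract(img: np.ndarray, length: int) -> str:
    flat = img.reshape(-1, 3)
    return "".join(chr(int(v)) for v in flat[:length, 0])

cover = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
stego = embed(cover, "HALO")
print(extract(stego, 4))                   # -> HALO
```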

  2. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images that mix textual, graphical, or pictorial contents. In this paper, we present a comparison of two transform-based block classification techniques for compound images, using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound images into fixed-size, non-overlapping blocks. Then a frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is applied over each block. Mean and standard deviation are computed for each 8 × 8 block and are used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and with complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation provides a significant improvement in recall rate and precision rate, approximately 2.3% over DCT-based segmentation, with an increase in block classification time for both smooth and complex background images.
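
    The sketch below shows block classification with an 8 × 8 DCT and simple coefficient statistics; the single-feature threshold rule is an illustrative assumption, not the classifier evaluated in the paper.

```python
# 8x8 DCT block classification sketch: high AC-coefficient spread suggests
# sharp text/graphics content, low spread suggests picture/background.
import numpy as np
from scipy.fft import dctn

def classify_blocks(gray: np.ndarray, block: int = 8, thresh: float = 40.0):
    h, w = gray.shape
    labels = np.empty((h // block, w // block), dtype=object)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coeffs = dctn(gray[i:i + block, j:j + block].astype(float), norm="ortho")
            ac = coeffs.copy()
            ac[0, 0] = 0.0                          # ignore the DC term
            feature = ac.std()                      # std of AC coefficients
            labels[i // block, j // block] = (
                "text/graphics" if feature > thresh else "picture/background")
    return labels

gray = np.random.rand(64, 64) * 255                 # stand-in for a screen image
print(classify_blocks(gray)[0, :4])
```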

  3. Picture This

    ERIC Educational Resources Information Center

    Dillon, Robert

    2012-01-01

    Images can be powerful tools for change, but without compelling images of the future of education, everyone will be forced to use the images of the past. The problem is that schools can't be built on old images: they must reflect current best practices that infuse technology, relationships, background knowledge, culturally responsive texts, and…

  4. Librarian's Image in Children's Fiction.

    ERIC Educational Resources Information Center

    Kitchen, Barbara

    The image of the librarian has engendered much discussion among professional librarians. Children's fiction and picture books are good mediums in which to examine the image of the librarian, since they provide impressionable children some of their earliest cultural knowledge. Children's authors can supply powerful images by means of text and…

  5. Machine printed text and handwriting identification in noisy document images.

    PubMed

    Zheng, Yefeng; Li, Huiping; Doermann, David

    2004-03-01

    In this paper, we address the problem of the identification of text in noisy document images. We are especially focused on segmenting and distinguishing between handwriting and machine printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content, and 2) the segmentation and recognition techniques required for machine printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model noise based on selected features. Trained Fisher classifiers are used to identify machine printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov Random Field-based (MRF) approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.
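
    As a minimal stand-in for the Fisher-classifier stage, the sketch below trains Linear Discriminant Analysis on synthetic per-region features for three classes (printed, handwriting, noise); the features are hypothetical, and the MRF-based context refinement from the paper is not included.

```python
# LDA (Fisher discriminant) over synthetic per-region features, three classes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical 4-D features per region (e.g., stroke-width stats, density).
X = np.vstack([
    rng.normal([3.0, 0.2, 0.8, 0.1], 0.3, size=(200, 4)),   # machine printed
    rng.normal([5.0, 1.5, 0.5, 0.2], 0.5, size=(200, 4)),   # handwriting
    rng.normal([1.0, 2.5, 0.1, 0.9], 0.7, size=(200, 4)),   # noise
])
y = np.repeat(["printed", "handwriting", "noise"], 200)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(rng.normal([3.0, 0.2, 0.8, 0.1], 0.3, size=(3, 4))))
```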

  6. The Rosetta phone: a hand-held device for automatic translation of signs in natural images

    NASA Astrophysics Data System (ADS)

    Jafri, Syed Ali Raza; Mikkilineni, Aravind K.; Boutin, Mireille; Delp, Edward J.

    2008-02-01

    When traveling in a region where the local language is not written using the Roman alphabet, translating written text (e.g., documents, road signs, or placards) is a particularly difficult problem since the text cannot be easily entered into a translation device or searched using a dictionary. To address this problem, we are developing the "Rosetta Phone," a handheld device (e.g., PDA or mobile telephone) capable of acquiring a picture of the text, identifying the text within the image, and producing both an audible and a visual English interpretation of the text. We started with English as a development language, for which we achieved close to 100% accuracy in identifying and reading text. We then modified the system to be able to read and translate words written using the Arabic character set. We currently achieve approximately 95% accuracy in reading words from a small directory of town names.

  7. Assessing treatment response in triple-negative breast cancer from quantitative image analysis in perfusion magnetic resonance imaging.

    PubMed

    Banerjee, Imon; Malladi, Sadhika; Lee, Daniela; Depeursinge, Adrien; Telli, Melinda; Lipson, Jafi; Golden, Daniel; Rubin, Daniel L

    2018-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is sensitive but not specific for determining treatment response in early stage triple-negative breast cancer (TNBC) patients. We propose an efficient computerized technique for assessing treatment response, specifically the residual tumor (RT) status and pathological complete response (pCR), in response to neoadjuvant chemotherapy. The proposed approach is based on Riesz wavelet analysis of pharmacokinetic maps derived from noninvasive DCE-MRI scans, obtained before and after treatment. We compared the performance of Riesz features with the traditional gray level co-occurrence matrices and a comprehensive characterization of the lesion that includes a wide range of quantitative features (e.g., shape and boundary). We investigated a set of predictive models ([Formula: see text]) incorporating distinct combinations of quantitative characterizations and statistical models at different time points of the treatment; some of the area under the receiver operating characteristic curve (AUC) values we report are above 0.8. The most efficient models are based on first-order statistics and Riesz wavelets, which predicted RT with an AUC value of 0.85 and pCR with an AUC value of 0.83, improving results reported in a previous study by [Formula: see text]. Our findings suggest that Riesz texture analysis of TNBC lesions can be considered a potential framework for optimizing TNBC patient care.

  8. Use of a web-based image reporting and tracking system for assessing abdominal imaging examination quality issues in a single practice.

    PubMed

    Rosenkrantz, Andrew B; Johnson, Evan; Sanger, Joseph J

    2015-10-01

    This article presents our local experience in the implementation of a real-time web-based system for reporting and tracking quality issues relating to abdominal imaging examinations. This system allows radiologists to electronically submit examination quality issues during clinical readouts. The submitted information is e-mailed to a designate for the given modality for further follow-up; the designate may subsequently enter text describing their response or action taken, which is e-mailed back to the radiologist. Review of 558 entries over a 6-year period demonstrated documentation of a broad range of examination quality issues, including specific issues relating to protocol deviation, post-processing errors, positioning errors, artifacts, and IT concerns. The most common issues varied among US, CT, MRI, radiography, and fluoroscopy. In addition, the most common issues resulting in a patient recall for repeat imaging (generally related to protocol deviation in MRI and US) were identified. In addition to submitting quality problems, radiologists also commonly used the tool to provide recognition of a well-performed examination. An electronic log of actions taken in response to radiologists' submissions indicated that both positive and negative feedback were commonly communicated to the performing technologist. Information generated using the tool can be used to guide subsequent quality improvement initiatives within a practice, including continued protocol standardization as well as education of technologists in the optimization of abdominal imaging examinations.

  9. Does 3-dimensional imaging of the third molar reduce the risk of experiencing inferior alveolar nerve injury owing to extraction?: A meta-analysis.

    PubMed

    Clé-Ovejero, Adrià; Sánchez-Torres, Alba; Camps-Font, Octavi; Gay-Escoda, Cosme; Figueiredo, Rui; Valmaseda-Castellón, Eduard

    2017-08-01

    Clinicians generally use panoramic radiographic (PR) images to assess the proximity of the mandibular third molar to the inferior alveolar nerve (IAN). However, in cases in which a patient needs to undergo a third-molar extraction, many clinicians also assess computed tomographic (CT) images to prevent nerve damage. Two of the authors independently searched MEDLINE (through PubMed), Cochrane Library, Scopus, and Ovid. The authors included randomized or nonrandomized longitudinal studies whose investigators had compared the number of IAN injuries after third-molar extraction in patients who had undergone preoperative CT with patients who had undergone only PR. The authors analyzed the full text of 26 of the 745 articles they initially selected. They included 6 studies in the meta-analysis. Four of the studies had a high risk of bias, and the investigators of only 1 study had used blinding with the patients. The authors observed no statistically significant differences between groups related to the total number of nerve injuries (risk ratio, 0.96; 95% confidence interval, 0.50 to 1.85; P = .91). The prognosis of the injuries was similar for both groups. Although having preoperative CT images might be useful for clinicians in terms of diagnosing and extracting mandibular third molars, having these CT images does not reduce patients' risk of experiencing IAN injuries nor does it affect their prognosis. Copyright © 2017 American Dental Association. Published by Elsevier Inc. All rights reserved.

  10. Effects of Image-Based and Text-Based Active Learning Exercises on Student Examination Performance in a Musculoskeletal Anatomy Course

    ERIC Educational Resources Information Center

    Gross, M. Melissa; Wright, Mary C.; Anderson, Olivia S.

    2017-01-01

    Research on the benefits of visual learning has relied primarily on lecture-based pedagogy, but the potential benefits of combining active learning strategies with visual and verbal materials on learning anatomy has not yet been explored. In this study, the differential effects of text-based and image-based active learning exercises on examination…

  11. The Image of the Negro in Deep South Public School State History Texts.

    ERIC Educational Resources Information Center

    McLaurin, Melton

    This report reviews the image portrayed of the Negro, in textbooks used in the deep South. Slavery is painted as a cordial, humane system under kindly masters and the Negro as docile and childlike. Although the treatment of the modern era is relatively more objective, the texts, on the whole, evade treatment of the Civil Rights struggle, violence,…

  12. [Application of text mining approach to pre-education prior to clinical practice].

    PubMed

    Koinuma, Masayoshi; Koike, Katsuya; Nakamura, Hitoshi

    2008-06-01

    We developed a new survey analysis technique to understand students' actual aims for effective pretraining prior to clinical practice. We asked third-year undergraduate students to write fixed-style complete and free sentences on "preparation of drug dispensing." Then, we converted their sentence data into text form and performed Japanese-language morphologic analysis on the data using language analysis software. We classified key words, which were created on the basis of the word class information from the Japanese-language morphologic analysis, into categories based on causes and characteristics. In addition, we classified the characteristics into six categories consisting of concepts including "knowledge," "skill and attitude," "image," etc., using the KJ method technique. The results showed that the awareness of students of "preparation of drug dispensing" tended to be approximately three-fold more frequent in "skill and attitude," "risk," etc. than in "knowledge." Regarding the characteristics in the category of the "image," words like "hard," "challenging," "responsibility," "life," etc. frequently occurred. The results of correspondence analysis showed that the characteristics of the words "knowledge" and "skill and attitude" were independent. As a result of developing a cause-and-effect diagram, it was demonstrated that the phrase "hanging tough" described most of the various factors. We thus could understand students' actual feelings by applying text mining as a new survey analysis technique.

  13. A comparison of three types of stimulus material in undergraduate mental health nursing education.

    PubMed

    Stone, Teresa E; Levett-Jones, Tracy

    2014-04-01

    The paper discusses an innovative educational approach that compared the use of different textual forms as stimulus materials in the teaching of an introductory mental health course. Practitioners in many disciplines, including nursing, appreciate the value of narratives in making sense of experiences, challenging assumptions and enhancing learning: they enable exploration of reality from different perspectives and create an emotional resonance. Narratives help nursing students to uncover embedded meanings, values and beliefs; they can include written texts, illustrated texts or picture books. 180 students enrolled in an elective undergraduate nursing course. This project afforded students the choice of critically analysing (a) a chapter from one of two autobiographies, (b) an illustrated text, or (c) an illustration from a picture book. Each text was a narrative account from a personal or carer's perspective of the experience of mental illness. Their written submissions were then analysed by means of a qualitative descriptive approach. In analysis of the autobiographies students tended to paraphrase the authors' words and summarise their experiences. Those choosing the illustrated text were able to link the images and text, and provide a deeper and more insightful level of interpretation, albeit influenced by the author's personal account and expressed emotions; however, those analysing a picture book illustration demonstrated a surprising level of critical and creative thinking, and their interpretations were empathetic, insightful and thoughtful. The use of picture books, although not a common approach in nursing education, appears to engage students, challenge them to think more deeply, and stimulate their imagination. © 2013.

  14. The Ecological Approach to Text Visualization.

    ERIC Educational Resources Information Center

    Wise, James A.

    1999-01-01

    Presents both theoretical and technical bases on which to build a "science of text visualization." The Spatial Paradigm for Information Retrieval and Exploration (SPIRE) text-visualization system, which images information from free-text documents as natural terrains, serves as an example of the "ecological approach" in its visual metaphor, its…

  15. Two-photon microscopy measurement of cerebral metabolic rate of oxygen using periarteriolar oxygen concentration gradients.

    PubMed

    Sakadžić, Sava; Yaseen, Mohammad A; Jaswal, Rajeshwer; Roussakis, Emmanuel; Dale, Anders M; Buxton, Richard B; Vinogradov, Sergei A; Boas, David A; Devor, Anna

    2016-10-01

    The cerebral metabolic rate of oxygen ([Formula: see text]) is an essential parameter for evaluating brain function and pathophysiology. However, the currently available approaches for quantifying [Formula: see text] rely on complex multimodal imaging and mathematical modeling. Here, we introduce a method that allows estimation of [Formula: see text] based on a single measurement modality-two-photon imaging of the partial pressure of oxygen ([Formula: see text]) in cortical tissue. We employed two-photon phosphorescence lifetime microscopy (2PLM) and the oxygen-sensitive nanoprobe PtP-C343 to map the tissue [Formula: see text] distribution around cortical penetrating arterioles. [Formula: see text] is subsequently estimated by fitting the changes of tissue [Formula: see text] around arterioles with the Krogh cylinder model of oxygen diffusion. We measured the baseline [Formula: see text] in anesthetized rats and modulated tissue [Formula: see text] levels by manipulating the depth of anesthesia. This method provides [Formula: see text] measurements localized within [Formula: see text] and it may provide oxygen consumption measurements in individual cortical layers or within confined cortical regions, such as in ischemic penumbra and the foci of functional activation.
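
    For context, a common closed form of the Krogh cylinder model that such periarteriolar pO2 profiles can be fitted to is sketched below; the symbols are generic (M: oxygen consumption rate per unit tissue volume, the quantity being estimated; D: oxygen diffusion coefficient; α: oxygen solubility; r_v: arteriolar radius; R_t: tissue cylinder radius), and the exact parameterization used by the authors may differ.

        P(r) = P(r_v) + \frac{M}{4 D \alpha}\left(r^{2} - r_v^{2}\right) - \frac{M R_t^{2}}{2 D \alpha}\,\ln\frac{r}{r_v}, \qquad r_v \le r \le R_t

    This form satisfies the steady-state diffusion equation D\alpha \, \frac{1}{r}\frac{d}{dr}\left(r \frac{dP}{dr}\right) = M with zero oxygen flux at r = R_t, so fitting measured tissue pO2(r) around an arteriole yields an estimate of M.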

  16. The king's animals and the king's books: the illustrations for the Paris Academy's Histoire des animaux.

    PubMed

    Guerrini, Anita

    2010-07-01

    This essay explores the place of natural philosophy among the patronage projects of Louis XIV, focusing on the Mémoires pour servir a l'histoire naturelle des animaux (or Histoire des animaux) of the 1670s, one of a number of works of natural philosophy to issue from Louis XIV's printing house. Questions particular to the Histoire des animaux include the interaction between text and image, the credibility and authority of images of exotic animals, and the relationship between comparative anatomy and natural history, and between human and animal anatomy. At the same time that the Histoire des animaux contributed to Jean-Baptiste Colbert's management of patronage and of Louis's image, it was a work of natural philosophy, representing the collaborative efforts of the new Paris Academy of Sciences. It examined natural history and comparative anatomy in new ways, and its illustrations broke new ground in their depiction of animals in a natural setting. However, the lavishly formatted books were presentation volumes and did not gain wide circulation until their republication in 1733. Sources consulted include Colbert's manuscript memoires on the royal printers and engravers.

  17. Continuing Medical Education Speakers with High Evaluation Scores Use more Image-based Slides.

    PubMed

    Ferguson, Ian; Phillips, Andrew W; Lin, Michelle

    2017-01-01

    Although continuing medical education (CME) presentations are common across health professions, it is unknown whether slide design is independently associated with audience evaluations of the speaker. Based on the conceptual framework of Mayer's theory of multimedia learning, this study aimed to determine whether image use and text density in presentation slides are associated with overall speaker evaluations. This retrospective analysis of six sequential CME conferences (two annual emergency medicine conferences over a three-year period) used a mixed linear regression model to assess whether post-conference speaker evaluations were associated with image fraction (percentage of image-based slides per presentation) and text density (number of words per slide). A total of 105 unique lectures were given by 49 faculty members, and 1,222 evaluations (70.1% response rate) were available for analysis. On average, 47.4% (SD=25.36) of slides had at least one educationally relevant image (image fraction). Image fraction significantly predicted overall higher evaluation scores [F(1, 100.676)=6.158, p=0.015] in the mixed linear regression model. The mean (SD) text density was 25.61 (8.14) words/slide but was not a significant predictor [F(1, 86.293)=0.55, p=0.815]. Of note, the individual speaker [χ²(1)=2.952, p=0.003] and speaker seniority [F(3, 59.713)=4.083, p=0.011] significantly predicted higher scores. This is the first published study to date assessing the linkage between slide design and CME speaker evaluations by an audience of practicing clinicians. The incorporation of images was associated with higher evaluation scores, in alignment with Mayer's theory of multimedia learning. Contrary to this theory, however, text density showed no significant association, suggesting that these scores may be multifactorial. Professional development efforts should focus on teaching best practices in both slide design and presentation skills.
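
    As a rough illustration of the analysis described (not the authors' code), a mixed linear regression of evaluation scores on image fraction and text density, with the individual speaker as a random grouping factor, could be set up with statsmodels roughly as follows; the column names (score, image_fraction, text_density, seniority, speaker) and the input file are hypothetical.

        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per post-conference evaluation of a lecture (hypothetical file).
        df = pd.read_csv("cme_evaluations.csv")

        # Fixed effects: image fraction, text density, speaker seniority;
        # random intercept for the individual speaker (the grouping factor).
        model = smf.mixedlm("score ~ image_fraction + text_density + C(seniority)",
                            data=df, groups=df["speaker"])
        result = model.fit()
        print(result.summary())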

  18. Interpreting comprehensive two-dimensional gas chromatography using peak topography maps with application to petroleum forensics.

    PubMed

    Ghasemi Damavandi, Hamidreza; Sen Gupta, Ananya; Nelson, Robert K; Reddy, Christopher M

    2016-01-01

    Comprehensive two-dimensional gas chromatography [Formula: see text] provides high-resolution separations across hundreds of compounds in a complex mixture, thus unlocking unprecedented information for intricate quantitative interpretation. We exploit this compound diversity across the [Formula: see text] topography to provide quantitative compound-cognizant interpretation beyond target compound analysis with petroleum forensics as a practical application. We focus on the [Formula: see text] topography of biomarker hydrocarbons, hopanes and steranes, as they are generally recalcitrant to weathering. We introduce peak topography maps (PTM) and topography partitioning techniques that consider a notably broader and more diverse range of target and non-target biomarker compounds compared to traditional approaches that consider approximately 20 biomarker ratios. Specifically, we consider a range of 33-154 target and non-target biomarkers with highest-to-lowest peak ratio within an injection ranging from 4.86 to 19.6 (precise numbers depend on biomarker diversity of individual injections). We also provide a robust quantitative measure for directly determining "match" between samples, without necessitating training data sets. We validate our methods across 34 [Formula: see text] injections from a diverse portfolio of petroleum sources, and provide quantitative comparison of performance against established statistical methods such as principal components analysis (PCA). Our data set includes a wide range of samples collected following the 2010 Deepwater Horizon disaster that released approximately 160 million gallons of crude oil from the Macondo well (MW). Samples that were clearly collected following this disaster exhibit statistically significant match [Formula: see text] using PTM-based interpretation against other closely related sources. PTM-based interpretation also provides higher differentiation between closely correlated but distinct sources than obtained using PCA-based statistical comparisons. In addition to results based on this experimental field data, we also provide extensive perturbation analysis of the PTM method over numerical simulations that introduce random variability of peak locations over the [Formula: see text] biomarker ROI image of the MW pre-spill sample (sample [Formula: see text] in Additional file 4: Table S1). We compare the robustness of the cross-PTM score against peak location variability in both dimensions and compare the results against PCA analysis over the same set of simulated images. Detailed description of the simulation experiment and discussion of results are provided in Additional file 1: Section S8. We provide a peak-cognizant informational framework for quantitative interpretation of [Formula: see text] topography. Proposed topographic analysis enables [Formula: see text] forensic interpretation across target petroleum biomarkers, while including the nuances of lesser-known non-target biomarkers clustered around the target peaks. This allows potential discovery of hitherto unknown connections between target and non-target biomarkers.
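
    The cross-PTM match score itself is not given in the abstract; as a simplified, hypothetical stand-in, a match between two samples could be scored by comparing their aligned vectors of biomarker peak heights, e.g. with a cosine similarity as sketched below (peak alignment and the topography partitioning that the actual method relies on are omitted).

        import numpy as np

        def match_score(peaks_a: np.ndarray, peaks_b: np.ndarray) -> float:
            """Cosine similarity between two aligned vectors of biomarker peak heights.

            Index i is assumed to refer to the same (target or non-target) biomarker
            in both samples; 1.0 means an identical relative peak pattern.
            """
            a = np.asarray(peaks_a, dtype=float)
            b = np.asarray(peaks_b, dtype=float)
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom > 0 else 0.0

        # Toy example with 5 aligned biomarker peaks from two oil samples.
        sample_1 = np.array([4.9, 12.3, 7.1, 19.6, 5.5])
        sample_2 = np.array([5.1, 11.8, 6.9, 18.9, 5.2])
        print(match_score(sample_1, sample_2))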

  19. Real-time text extraction based on the page layout analysis system

    NASA Astrophysics Data System (ADS)

    Soua, M.; Benchekroun, A.; Kachouri, R.; Akil, M.

    2017-05-01

    Several approaches have been proposed to extract text from scanned documents. However, text extraction in heterogeneous documents remains a real challenge. Indeed, text extraction in this context is a difficult task because of the variation of the text due to differences in size, style and orientation, as well as the complexity of the document region background. Recently, we proposed the improved hybrid binarization based on K-means method (I-HBK) to extract text suitably from heterogeneous documents. In this method, the Page Layout Analysis (PLA), part of the Tesseract OCR engine, is used to identify text and image regions. Afterwards, our hybrid binarization is applied separately to each kind of region. On one side, gamma correction is employed before processing image regions. On the other side, binarization is performed directly on text regions. Then, a foreground and background color study is performed to correct inverted region colors. Finally, characters are located in the binarized regions using the PLA algorithm. In this work, we extend the integration of the PLA algorithm within the I-HBK method. In addition, to speed up the text and image separation step, we employ an efficient GPU acceleration. Through the performed experiments, we demonstrate the high F-measure accuracy of the PLA algorithm, reaching 95% on the LRDE dataset. In addition, we compare the sequential and parallel PLA versions. The obtained results give a speedup of 3.7x when comparing the parallel PLA implementation on a GPU GTX 660 to the CPU version.
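
    As a loose sketch of K-means-based binarization (not the I-HBK implementation itself), the grayscale intensities of a text region can be clustered into two groups and the darker cluster taken as foreground; the example below uses only NumPy and assumes an 8-bit grayscale region.

        import numpy as np

        def kmeans_binarize(gray: np.ndarray, iters: int = 20) -> np.ndarray:
            """Two-cluster K-means on pixel intensities; returns a boolean text mask."""
            pixels = gray.astype(float).ravel()
            c_dark, c_light = pixels.min(), pixels.max()      # initial centroids
            for _ in range(iters):
                dark = np.abs(pixels - c_dark) <= np.abs(pixels - c_light)
                if dark.any():
                    c_dark = pixels[dark].mean()
                if (~dark).any():
                    c_light = pixels[~dark].mean()
            # Foreground (text) is assumed to be the darker cluster.
            g = gray.astype(float)
            return np.abs(g - c_dark) <= np.abs(g - c_light)

        # Toy example: dark glyph pixels (20) on a light background (220).
        region = np.full((8, 8), 220, dtype=np.uint8)
        region[2:6, 3:5] = 20
        print(kmeans_binarize(region).astype(int))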

  20. Anatomy of the infant head

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosma, J.F.

    1986-01-01

    This text is mainly an atlas of illustrations representing the dissection of the head and upper neck of the infant. It was prepared by the author over a 20-year period. The commentary compares the anatomy of the near-term infant with that of a younger fetus, child, and adult. As the author indicates, the dearth of anatomic information about postnatal anatomic changes represents a considerable handicap to those imaging infants. In part 1 of the book, anatomy is related to physiologic performance involving the pharynx, larynx, and mouth. Sequential topics involve the regional anatomy of the head (excluding the brain), the skeleton of the cranium, the nose, orbit, mouth, larynx, pharynx, and ear. To facilitate use of this text as a reference, the illustrations and text on individual organs are considered separately (i.e., the nose, the orbit, the eye, the mouth, the larynx, the pharynx, and the ear). Each part concerned with a separate organ includes materials from the regional illustrations contained in part 2 and from the skeleton, which is treated in part 3. Also included is a summary of the embryologic and fetal development of the organ.

  1. Surgical criteria for femoroacetabular impingement syndrome: a scoping review.

    PubMed

    Peters, Scott; Laing, Alisha; Emerson, Courtney; Mutchler, Kelsey; Joyce, Thomas; Thorborg, Kristian; Hölmich, Per; Reiman, Michael

    2017-11-01

    The purpose of this review was to analyse and report criteria used for open and arthroscopic surgical treatment of femoroacetabular impingement syndrome (FAIS). A librarian-assisted computer search of Medline, CINAHL and Embase for studies related to criteria for FAIS surgery was used in this study. Inclusion criteria included studies with the primary purpose of surgery or surgical outcomes for treatment of FAIS with and without labral tear, and reporting criteria for FAIS surgery. Diagnostic imaging was a criterion for surgery in 92% of the included studies, with alpha angle the most frequently reported (68% of studies) criterion. Reporting of symptoms was a criterion for surgery in 75%, and special tests a criterion in 70% of studies. Range-of-motion limitations were only a required criterion in 30%, only 12% of studies required intra-articular injection and 44% of studies described previously failed treatment (non-surgical or physiotherapist-led rehabilitation) as a criterion for surgery. Only 56% of included studies utilised the combination of symptoms, clinical signs and diagnostic imaging combined for diagnosis of FAIS as suggested by the Warwick Agreement on FAIS meeting. Diagnostic imaging evidence of FAIS was the most commonly reported criterion for surgery. Only 56% of included studies utilised the combination of symptoms, clinical signs and diagnostic imaging for diagnosis of FAIS as suggested by the Warwick Agreement on FAIS meeting, and only 44% of studies had failed non-surgical treatment (and 18% a failed trial of physiotherapy) as a criterion for surgery. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. #fitspo on Instagram: A mixed-methods approach using Netlytic and photo analysis, uncovering the online discussion and author/image characteristics.

    PubMed

    Santarossa, Sara; Coyne, Paige; Lisinski, Carly; Woodruff, Sarah J

    2016-11-01

    The #fitspo 'tag' is a recent trend on Instagram, which is used on posts to motivate others towards a healthy lifestyle through exercise/eating habits. This study used a mixed-methods approach consisting of text and network analysis via the Netlytic program (N = 10,000 #fitspo posts) and content analysis of #fitspo images (N = 122) to examine author and image characteristics. Results suggest that #fitspo posts may motivate through appearance-mediated themes, as the largest content categories (based on the associated text) were 'feeling good' and 'appearance'. Furthermore, #fitspo posts may create peer influence/support as personal (opposed to non-personal) accounts were associated with higher popularity of images (i.e. number of likes/followers). Finally, most images contained posed individuals with some degree of objectification.

  3. MultiFacet: A Faceted Interface for Browsing Large Multimedia Collections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henry, Michael J.; Hampton, Shawn D.; Endert, Alexander

    2013-10-31

    Faceted browsing is a common technique for exploring collections where the data can be grouped into a number of pre-defined categories, most often generated from textual metadata. Historically, faceted browsing has been applied to a single data type such as text or image data. However, typical collections contain multiple data types, such as information from web pages that contain text, images, and video. Additionally, when browsing a collection of images and video, facets are often created based on the metadata which may be incomplete, inaccurate, or missing altogether instead of the actual visual content contained within those images and video. In this work we address these limitations by presenting MultiFacet, a faceted browsing interface that supports multiple data types. MultiFacet constructs facets for images and video in a collection from the visual content using computer vision techniques. These visual facets can then be browsed in conjunction with text facets within a single interface to reveal relationships and phenomena within multimedia collections. Additionally, we present a use case based on real-world data, demonstrating the utility of this approach towards browsing a large multimedia data collection.

  4. Machine learning and radiology.

    PubMed

    Wang, Shijun; Summers, Ronald M

    2012-07-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.

  5. Contesting nonfiction: Fourth graders making sense of words and images in science information book discussions

    NASA Astrophysics Data System (ADS)

    Belfatti, Monica A.

    Recently developed common core standards echo calls by educators for ensuring that upper elementary students become proficient readers of informational texts. Informational texts have been theorized as causing difficulty for students because they contain linguistic and visual features different from more familiar narrative genres (Lemke, 2004). It has been argued that learning to read informational texts, particularly those with science subject matter, requires making sense of words, images, and the relationships among them (Pappas, 2006). Yet, conspicuously absent in the research are empirical studies documenting ways students make use of textual resources to build textual and conceptual understandings during classroom literacy instruction. This 10-month practitioner research study was designed to investigate the ways a group of ethnically and linguistically diverse fourth graders in one metropolitan school made sense of science information books during dialogically organized literature discussions. In this nontraditional instructional context, I wondered whether and how young students might make use of science informational text features, both words and images, in the midst of collaborative textual and conceptual inquiry. Drawing on methods of constructivist grounded theory and classroom discourse analysis, I analyzed student and teacher talk in 25 discussions of earth and life science books. Digital voice recordings and transcriptions served as the main data sources for this study. I found that, without teacher prompts or mandates to do so, fourth graders raised a wide range of textual and conceptual inquiries about words, images, scientific figures, and phenomena. In addition, my analysis yielded a typology of ways students constructed relationships between words and images within and across page openings of the information books read for their sense-making endeavors. The diversity of constructed word-image relationships aided students in raising, exploring, and contesting textual and conceptual ideas. Moreover, through their joint inquiries, students marshaled and evaluated a rich array of resources. Students' sense-making of information books was not contained by the words and images alone; it involved a situated, complex process of making sense of multiple texts, discourses, and epistemologies. These findings suggest educators, theorists, and policy makers reconsider acontextual, linear, hierarchical models for developing elementary students as sense-makers of nonfiction.

  6. Search for long-lived heavy charged particles using a ring imaging Cherenkov technique at LHCb.

    PubMed

    Aaij, R; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; d'Argent, P; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bird, T; Birnkraut, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Brett, D; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Campana, P; Campora Perez, D; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casanova Mohr, R; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chefdeville, M; Chen, S; Cheung, S F; Chiapolini, N; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Counts, I; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dalseno, J; David, P N Y; Davis, A; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Silva, W; De Simone, P; Dean, C T; Decamp, D; Deckenhoff, M; Del Buono, L; Déléage, N; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Ruscio, F; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dreimanis, K; Dujany, G; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Färber, C; Farinelli, C; Farley, N; Farry, S; Fay, R; Ferguson, D; Fernandez Albor, V; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fol, P; Fontana, M; Fontanelli, F; Forty, R; Francisco, O; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; García Pardiñas, J; Garofoli, J; Garra Tico, J; Garrido, L; Gascon, D; Gaspar, C; Gauld, R; Gavardi, L; Gazzoni, G; Geraci, A; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianelle, A; Gianì, S; Gibson, V; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Hampson, T; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; He, J; Head, T; Heijne, V; Hennessy, K; Henrard, P; Henry, L; Hernando 
Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Humair, T; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, S; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khanji, B; Khurewathanakul, C; Klaver, S; Klimaszewski, K; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kurek, K; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J P; Lefèvre, R; Leflat, A; Lefrançois, J; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Lohn, S; Longstaff, I; Lopes, J H; Lucchesi, D; Luo, H; Lupato, A; Luppi, E; Lupton, O; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Malde, S; Malinin, A; Manca, G; Mancinelli, G; Manning, P; Mapelli, A; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Märki, R; Marks, J; Martellotti, G; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; McSkelly, B; Meadows, B; Meier, F; Meissner, M; Merk, M; Milanes, D A; Minard, M N; Mitzel, D S; Molina Rodriguez, J; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Müller, J; Müller, K; Müller, V; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, C J G; Osorio Rodrigues, B; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Parkes, C; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rakotomiaramanana, B; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redi, F; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; Rotondo, M; Rouvinet, J; Ruf, T; Ruiz, H; Ruiz Valls, P; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schune, M H; Schwemmer, R; 
Sciascia, B; Sciubba, A; Semennikov, A; Sepp, I; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Silva Coutinho, R; Simi, G; Sirendi, M; Skidmore, N; Skillicorn, I; Skwarnicki, T; Smith, E; Smith, E; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Steinkamp, O; Stenyakin, O; Sterpka, F; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Stroili, R; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Todd, J; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Trabelsi, K; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ubeda Garcia, M; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wiedner, D; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L

    A search is performed for heavy long-lived charged particles using 3.0 [Formula: see text] of proton-proton collisions collected at [Formula: see text][Formula: see text] 7 and 8  TeV with the LHCb detector. The search is mainly based on the response of the ring imaging Cherenkov detectors to distinguish the heavy, slow-moving particles from muons. No evidence is found for the production of such long-lived states. The results are expressed as limits on the Drell-Yan production of pairs of long-lived particles, with both particles in the LHCb pseudorapidity acceptance, [Formula: see text]. The mass-dependent cross-section upper limits are in the range 2-4 fb (at 95 % CL) for masses between 14 and 309 [Formula: see text].

  7. Skinny Is Not Enough: A Content Analysis of Fitspiration on Pinterest.

    PubMed

    Simpson, Courtney C; Mazzeo, Suzanne E

    2017-05-01

    Fitspiration is a relatively new social media trend nominally intended to promote health and fitness. Fitspiration messages are presented as encouraging; however, they might also engender body dissatisfaction and compulsive exercise. This study analyzed fitspiration content (n = 1050) on the image-based social media platform Pinterest. Independent raters coded the images and text present in the posts. Messages were categorized as appearance- or health-related, and coded for Social Cognitive Theory constructs: standards, behaviors, and outcome expectancies. Messages encouraged appearance-related body image standards and weight management behaviors more frequently than health-related standards and behaviors, and emphasized attractiveness as motivation to partake in such behaviors. Results also indicated that fitspiration messages include a comparable amount of fit praise (i.e., emphasis on toned/defined muscles) and thin praise (i.e., emphasis on slenderness), suggesting that women are not only supposed to be thin but also fit. Considering the negative outcomes associated with both exposure to idealized body images and exercising for appearance reasons, findings suggest that fitspiration messages are problematic, especially for viewers with high risk of eating disorders and related issues.

  8. A method of non-contact reading code based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee the security of computer information exchange between internal and external networks (trusted and untrusted networks), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network isolation methods. Using computer monitors, a camera and other equipment, the information to be exchanged is processed through the following steps: image coding, generation of the standard image, display and capture of the actual image, calculation of the homography matrix, and image distortion correction and decoding after calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s is achieved. The experiments show that the algorithm has the characteristics of high security, fast speed and low information loss, and can meet the daily needs of confidentiality departments to update data effectively and reliably. It solves the difficulty of computer information exchange between secret and non-secret networks, with distinctive originality, practicability and practical research value.
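
    A minimal sketch of the geometric-correction step described above, assuming OpenCV is available and that the four corners of the displayed code pattern have already been located in the captured camera frame; the coordinates, file names and the 800x800 target size are illustrative only.

        import cv2
        import numpy as np

        # Corners of the on-screen code pattern as detected in the camera frame
        # (hypothetical pixel coordinates, ordered TL, TR, BR, BL).
        detected_corners = np.array([[102, 87], [912, 95], [905, 908], [96, 900]],
                                    dtype=np.float32)

        # Where those corners should map to in the rectified 800x800 image.
        target_corners = np.array([[0, 0], [800, 0], [800, 800], [0, 800]],
                                  dtype=np.float32)

        frame = cv2.imread("captured_frame.png", cv2.IMREAD_GRAYSCALE)

        # Homography between the distorted capture and the ideal code image.
        H, _ = cv2.findHomography(detected_corners, target_corners)

        # Warp the captured frame so the code pattern becomes fronto-parallel,
        # ready for thresholding and decoding.
        rectified = cv2.warpPerspective(frame, H, (800, 800))
        cv2.imwrite("rectified_code.png", rectified)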

  9. Precise and Efficient Retrieval of Captioned Images: The MARIE Project.

    ERIC Educational Resources Information Center

    Rowe, Neil C.

    1999-01-01

    The MARIE project explores knowledge-based information retrieval of captioned images of the kind found in picture libraries and on the Internet. MARIE's five-part approach exploits the idea that images are easier to understand with context, especially descriptive text near them, but it also does image analysis. Experiments show MARIE prototypes…

  10. Low-Achieving Readers, High Expectations: Image Theatre Encourages Critical Literacy

    ERIC Educational Resources Information Center

    Rozansky, Carol Lloyd; Aagesen, Colleen

    2010-01-01

    Students in an eighth-grade, urban, low-achieving reading class were introduced to critical literacy through engagement in Image Theatre. Developed by liberatory dramatist Augusto Boal, Image Theatre gives participants the opportunity to examine texts in the triple role of interpreter, artist, and sculptor (i.e., image creator). The researchers…

  11. Science Museum Series - Speed, Time, Space, and Flight

    NASA Astrophysics Data System (ADS)

    Wilkinson, Philip

    2004-04-01

    This four-volume set explores some of the most popular areas of science and invention. It is produced in collaboration with the Science Museum in London, which houses one of the most remarkable science collections in the world. Each book takes one area of our inventiveness and reveals our progress through time, highlighting the key developments and ending with the state-of-the-art technology of today. Each story is told with brief, lively text linked to the four-color images and includes a glossary and index.

  12. Informatics in radiology (infoRAD): HTML and Web site design for the radiologist: a primer.

    PubMed

    Ryan, Anthony G; Louis, Luck J; Yee, William C

    2005-01-01

    A Web site has enormous potential as a medium for the radiologist to store, present, and share information in the form of text, images, and video clips. With a modest amount of tutoring and effort, designing a site can be as painless as preparing a Microsoft PowerPoint presentation. The site can then be used as a hub for the development of further offshoots (eg, Web-based tutorials, storage for a teaching library, publication of information about one's practice, and information gathering from a wide variety of sources). By learning the basics of hypertext markup language (HTML), the reader will be able to produce a simple and effective Web page that permits display of text, images, and multimedia files. The process of constructing a Web page can be divided into five steps: (a) creating a basic template with formatted text, (b) adding color, (c) importing images and multimedia files, (d) creating hyperlinks, and (e) uploading one's page to the Internet. This Web page may be used as the basis for a Web-based tutorial comprising text documents and image files already in one's possession. Finally, there are many commercially available packages for Web page design that require no knowledge of HTML.
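
    As a toy illustration of steps (a) through (d) (not taken from the article), the short script below writes a minimal teaching-case page containing formatted text, color, an image, a multimedia file and a hyperlink; all file names and case details are placeholders.

        # Writes a minimal teaching-file page illustrating formatted text, color,
        # an image, a multimedia file and a hyperlink (file names are placeholders).
        page = """<!DOCTYPE html>
        <html>
          <head><title>Chest Radiograph Teaching Case</title></head>
          <body style="background-color:#f0f0f0">
            <h1 style="color:navy">Case 1: Right lower lobe pneumonia</h1>
            <p>A 54-year-old presents with cough and fever.</p>
            <img src="cxr_case1.jpg" alt="PA chest radiograph" width="512">
            <video src="ultrasound_clip.mp4" controls></video>
            <p><a href="case2.html">Next case</a></p>
          </body>
        </html>
        """

        with open("case1.html", "w", encoding="utf-8") as f:
            f.write(page)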

  13. Ocular higher-order aberrations and axial eye growth in young Hong Kong children.

    PubMed

    Lau, Jason K; Vincent, Stephen J; Collins, Michael J; Cheung, Sin-Wan; Cho, Pauline

    2018-04-30

    This retrospective longitudinal analysis aimed to investigate the association between ocular higher-order aberrations (HOAs) and axial eye growth in Hong Kong children. Measures of axial length and ocular HOAs under cycloplegia were obtained annually over a two-year period from 137 subjects aged 8.8 ± 1.4 years with mean spherical equivalent refraction of -2.04 ± 2.38 D. A significant negative association was observed between the RMS of total HOAs and axial eye growth (P = 0.03), after adjusting for other significant predictors of axial length including age, sex and refractive error. Similar negative associations with axial elongation were found for the RMS of spherical aberrations ([Formula: see text] and [Formula: see text] combined) (P = 0.037). Another linear mixed model also showed that greater levels of vertical trefoil [Formula: see text], primary spherical aberration [Formula: see text] and negative oblique trefoil [Formula: see text] were associated with slower axial elongation and longer axial length (all P < 0.05). These findings support the potential role of HOAs, image quality and a vision-dependent mechanism in childhood eye growth.

  14. The use of the liquid crystal display (LCD) panel as a teaching aid in medical lectures.

    PubMed

    Wong, K T

    1992-01-01

    The liquid crystal display (LCD) panel is designed to project on-screen information of a microcomputer onto a larger screen with the aid of a standard overhead projector, so that large audiences may view on-screen information without having to crowd around the TV monitor. As little has been written about its use as a visual aid in medical teaching, the present report documents its use in a series of pathology lectures delivered, over a 2-year period, to two classes of about 150 medical students each. Some advantages of the LCD panel over the 35mm slide include the flexibility of last-minute text changes and less lead time needed for text preparation. It eliminates the problems of messy last-minute changes in, and improves legibility of, handwritten overhead projector transparencies. The disadvantages of using an LCD panel include the relatively bulky equipment which may pose transport problems, image clarity that is inferior to the 35mm slide, and equipment costs.

  15. 1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.

    PubMed

    Channin, D S

    1995-12-01

    Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.

  16. DR HAGIS-a fundus image database for the automatic extraction of retinal surface vessels from diabetic patients.

    PubMed

    Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall

    2017-01-01

    A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras. This results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying image sizes and resolutions and four comorbidity subgroups: collectively defined as the diabetic retinopathy, hypertension, age-related macular degeneration, and Glaucoma image set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide some baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
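
    As a rough sketch of a Gabor-filter-based vessel enhancement step (not the specific algorithm evaluated in the paper), the green channel of a fundus image can be filtered with a bank of oriented Gabor kernels and the maximum response kept per pixel; OpenCV is assumed and the kernel parameters are illustrative.

        import cv2
        import numpy as np

        fundus = cv2.imread("fundus.png")           # color fundus image
        green = fundus[:, :, 1].astype(np.float32)  # vessels contrast best in green
        green = cv2.normalize(green, None, 0, 1, cv2.NORM_MINMAX)

        # Bank of oriented Gabor kernels; keep the strongest response per pixel.
        response = np.zeros_like(green)
        for theta in np.arange(0, np.pi, np.pi / 12):          # 12 orientations
            kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                                        lambd=8.0, gamma=0.5, psi=0)
            filtered = cv2.filter2D(1.0 - green, cv2.CV_32F, kernel)  # vessels are dark
            response = np.maximum(response, filtered)

        # Simple global threshold on the response gives a crude vessel mask.
        vessel_mask = (response > response.mean() + 2 * response.std()).astype(np.uint8) * 255
        cv2.imwrite("vessel_mask.png", vessel_mask)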

  17. tranSMART-XNAT Connector tranSMART-XNAT connector-image selection based on clinical phenotypes and genetic profiles.

    PubMed

    He, Sijin; Yong, May; Matthews, Paul M; Guo, Yike

    2017-03-01

    TranSMART has a wide range of functionalities for translational research and a large user community, but it does not support imaging data. In this context, imaging data typically includes 2D or 3D sets of magnitude data and metadata information. Imaging data may summarise complex feature descriptions in a less biased fashion than user-defined plain text and numeric values. Imaging data is also contextualised by other data sets and may be analysed jointly with other data that can explain features or their variation. Here we describe the tranSMART-XNAT Connector we have developed. This connector consists of components for data capture, organisation and analysis. Data capture is responsible for image capture either from a PACS system, directly from an MRI scanner, or from raw data files. Data are organised in a similar fashion to tranSMART and are stored in a format that allows direct analysis within tranSMART. The connector enables selection and download of DICOM images and associated resources using subjects' clinical phenotypic and genotypic criteria. The tranSMART-XNAT connector is written in Java/Groovy/Grails. It is maintained and available for download at https://github.com/sh107/transmart-xnat-connector.git. sijin@ebi.ac.uk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  18. Tailoring four-dimensional cone-beam CT acquisition settings for fiducial marker-based image guidance in radiation therapy.

    PubMed

    Jin, Peng; van Wieringen, Niek; Hulshof, Maarten C C M; Bel, Arjan; Alderliesten, Tanja

    2018-04-01

    Use of four-dimensional cone-beam CT (4D-CBCT) and fiducial markers for image guidance during radiation therapy (RT) of mobile tumors is challenging due to the trade-off among image quality, imaging dose, and scanning time. This study aimed to investigate different 4D-CBCT acquisition settings for good visibility of fiducial markers in 4D-CBCT. Using these 4D-CBCTs, the feasibility of marker-based 4D registration for RT setup verification and manual respiration-induced motion quantification was investigated. For this, we applied a dynamic phantom with three different breathing motion amplitudes and included two patients with implanted markers. Irrespective of the motion amplitude, for a medium field of view (FOV), marker visibility was improved by reducing the imaging dose per projection and increasing the number of projection images; however, the scanning time was 4 to 8 min. For a small FOV, the total imaging dose and the scanning time were reduced (62.5% of the dose using a medium FOV, 2.5 min) without losing marker visibility. However, the body contour could be missing for a small FOV, which is not preferred in RT. The marker-based 4D setup verification was feasible for both the phantom and patient data. Moreover, manual marker motion quantification can achieve a high accuracy with a mean error of [Formula: see text].

  19. Bladder accumulated dose in image-guided high-dose-rate brachytherapy for locally advanced cervical cancer and its relation to urinary toxicity

    NASA Astrophysics Data System (ADS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Gaudet, Marc; Aquino-Parsons, Christina; Spadinger, Ingrid

    2016-12-01

    The purpose of this study was to estimate locally accumulated dose to the bladder in multi-fraction high-dose-rate (HDR) image-guided intracavitary brachytherapy (IG-ICBT) for cervical cancer, and to study the locally accumulated dose parameters as predictors of late urinary toxicity. A retrospective study of 60 cervical cancer patients who received five HDR IG-ICBT sessions was performed. The bladder outer and inner surfaces were segmented for all sessions and a bladder-wall contour point-set was created in MATLAB. The bladder-wall point-sets for each patient were registered using a deformable point-set registration toolbox called coherent point drift (CPD), and the fraction doses were accumulated. Various dosimetric and volumetric parameters were calculated using the registered doses, including $r\mathrm{D}_{n\,\mathrm{cm}^3}$ (minimum dose to the most exposed n cm3 volume of bladder wall), $r\mathrm{V}_{m\,\mathrm{Gy}}$ (wall volume receiving at least m Gy), and $r\mathrm{EQD2}_{n\,\mathrm{cm}^3}$ (minimum equivalent biologically weighted dose to the most exposed n cm3 of bladder wall), where n = 1/2/5/10 and m = 3/5/10. Minimum dose to contiguous 1 and 2 cm3 hot-spot volumes was also calculated. The unregistered dose volume histogram (DVH)-summed equivalents of the $r\mathrm{D}_{n\,\mathrm{cm}^3}$ and $r\mathrm{EQD2}_{n\,\mathrm{cm}^3}$ parameters (i.e. $s\mathrm{D}_{n\,\mathrm{cm}^3}$ and $s\mathrm{EQD2}_{n\,\mathrm{cm}^3}$) were determined for comparison. Late urinary toxicity was assessed using the LENT-SOMA scale, with toxicity Grade 0-1 categorized as Controls and Grade 2-4 as Cases. A two-sample t-test was used to identify differences between the means of the Control and Case groups for all parameters. A binomial logistic regression was also performed between the registered dose parameters and toxicity grouping. Seventeen patients were in the Case and 43 patients in the Control group. Contiguous values were on average 16% and 18% smaller than the corresponding parameters for 1 and 2 cm3 volumes, respectively; contiguous values were on average 26% and 27% smaller than the corresponding parameters. The only statistically significant finding for Case versus Control based on both methods of analysis was observed for $r\mathrm{V}_{3\,\mathrm{Gy}}$ (p = 0.01). DVH-summed parameters based on unregistered structure volumes overestimated the bladder dose in our patients, particularly when contiguous high-dose volumes were considered. The bladder-wall volume receiving at least 3 Gy of accumulated dose may be a parameter of interest in further investigations of Grade 2+ urinary toxicity.
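
    As a simplified, hypothetical illustration of one class of the accumulated-dose metrics above (not the authors' MATLAB code), the minimum dose to the most exposed n cm3 of bladder wall can be obtained from per-voxel accumulated doses by sorting the voxels from hottest to coldest and accumulating volume:

        import numpy as np

        def dose_to_hottest_volume(voxel_doses_gy: np.ndarray,
                                   voxel_volumes_cm3: np.ndarray,
                                   n_cm3: float) -> float:
            """Minimum dose received by the most exposed n_cm3 of the structure
            (an rD_{n cm3}-style metric), given accumulated dose per voxel."""
            order = np.argsort(voxel_doses_gy)[::-1]          # hottest voxels first
            cum_volume = np.cumsum(voxel_volumes_cm3[order])
            idx = np.searchsorted(cum_volume, n_cm3)          # voxels needed to reach n cm3
            idx = min(idx, len(order) - 1)
            return float(voxel_doses_gy[order[idx]])

        # Toy example: 6 bladder-wall voxels of 0.5 cm3 each.
        doses = np.array([6.1, 4.8, 3.2, 7.4, 5.0, 2.1])       # Gy, accumulated over fractions
        volumes = np.full(6, 0.5)
        print(dose_to_hottest_volume(doses, volumes, n_cm3=2.0))  # min dose to hottest 2 cm3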

  20. Public sentiment and discourse about Zika virus on Instagram.

    PubMed

    Seltzer, E K; Horst-Martz, E; Lu, M; Merchant, R M

    2017-09-01

    Social media have strongly influenced the awareness and perceptions of public health emergencies, and a considerable amount of social media content is now shared through images, rather than text alone. This content can impact preparedness and response due to the popularity and real-time nature of social media platforms. We sought to explore how the image-sharing platform Instagram is used for information dissemination and conversation during the current Zika outbreak. This was a retrospective review of publicly posted images about Zika on Instagram. Using the keyword '#zika' we identified 500 images posted on Instagram from May to August 2016. Images were coded by three reviewers and contextual information was collected for each image about sentiment, image type, content, audience, geography, reliability, and engagement. Of 500 images tagged with #zika, 342 (68%) contained content actually related to Zika. Of the 342 Zika-specific images, 299 were coded as 'health' and 193 were coded 'public interest'. Some images had multiple 'health' and 'public interest' codes. Health images tagged with #zika were primarily related to transmission (43%, 129/299) and prevention (48%, 145/299). Transmission-related posts were more often mosquito-human transmission (73%, 94/129) than human-human transmission (27%, 35/129). Mosquito bite prevention posts outnumbered safe sex prevention; (84%, 122/145) and (16%, 23/145) respectively. Images with a target audience were primarily aimed at women (95%, 36/38). Many posts (60%, 61/101) included misleading, incomplete, or unclear information about the virus. Additionally, many images expressed fear and negative sentiment (51%, 79/156). Instagram can be used to characterize public sentiment and highlight areas of focus for public health, such as correcting misleading or incomplete information or expanding messages to reach diverse audiences. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  1. The schemes and methods for producing of the visual security features used in the color hologram stereography

    NASA Astrophysics Data System (ADS)

    Lushnikov, D. S.; Zherdev, A. Y.; Odinokov, S. B.; Markin, V. V.; Smirnov, A. V.

    2017-05-01

    Visual security elements used in color holographic stereograms (three-dimensional colored security holograms) and methods for their production are described in this article. These visual security elements include color microtext, a color-hidden image, and horizontal and vertical flip-flop effects with changes of color and image. The article also presents variants of optical systems that allow the visual security elements to be recorded as part of the holographic stereograms. Methods for solving the optical problems arising when recording the visual security elements are presented. Features of how the visual security elements are perceived during verification of security holograms using these elements are also noted. The work was partially funded under the Agreement with the RF Ministry of Education and Science № 14.577.21.0197, grant RFMEFI57715X0197.

  2. The AstroVR Collaboratory, an On-line Multi-User Environment for Research in Astrophysics

    NASA Astrophysics Data System (ADS)

    van Buren, D.; Curtis, P.; Nichols, D. A.; Brundage, M.

    We describe our experiment with an on-line collaborative environment where users share the execution of programs and communicate via audio, video, and typed text. Collaborative environments represent the next step in computer-mediated conferencing, combining powerful compute engines, data persistence, shared applications, and teleconferencing tools. As proof of concept, we have implemented a shared image analysis tool, allowing geographically distinct users to analyze FITS images together. We anticipate that AstroVR (http://astrovr.ipac.caltech.edu:8888) and similar systems will become an important part of collaborative work in the next decade, with applications in remote observing, spacecraft operations, and on-line meetings, as well as day-to-day research activities. The technology is generic and promises to find uses in business, medicine, government, and education.

  3. Multispectral imaging reveals biblical-period inscription unnoticed for half a century

    PubMed Central

    Cordonsky, Michael; Levin, David; Moinester, Murray; Sass, Benjamin; Turkel, Eli; Piasetzky, Eli; Finkelstein, Israel

    2017-01-01

    Most surviving biblical period Hebrew inscriptions are ostraca—ink-on-clay texts. They are poorly preserved and once unearthed, fade rapidly. Therefore, proper and timely documentation of ostraca is essential. Here we show a striking example of a hitherto invisible text on the back side of an ostracon revealed via multispectral imaging. This ostracon, found at the desert fortress of Arad and dated to ca. 600 BCE (the eve of Judah’s destruction by Nebuchadnezzar), has been on display for half a century. Its front side has been thoroughly studied, while its back side was considered blank. Our research revealed three lines of text on the supposedly blank side and four "new" lines on the front side. Our results demonstrate the need for multispectral image acquisition for both sides of all ancient ink ostraca. Moreover, in certain cases we recommend employing multispectral techniques for screening newly unearthed ceramic potsherds prior to disposal. PMID:28614416

  4. Multispectral imaging reveals biblical-period inscription unnoticed for half a century.

    PubMed

    Faigenbaum-Golovin, Shira; Mendel-Geberovich, Anat; Shaus, Arie; Sober, Barak; Cordonsky, Michael; Levin, David; Moinester, Murray; Sass, Benjamin; Turkel, Eli; Piasetzky, Eli; Finkelstein, Israel

    2017-01-01

    Most surviving biblical period Hebrew inscriptions are ostraca-ink-on-clay texts. They are poorly preserved and once unearthed, fade rapidly. Therefore, proper and timely documentation of ostraca is essential. Here we show a striking example of a hitherto invisible text on the back side of an ostracon revealed via multispectral imaging. This ostracon, found at the desert fortress of Arad and dated to ca. 600 BCE (the eve of Judah's destruction by Nebuchadnezzar), has been on display for half a century. Its front side has been thoroughly studied, while its back side was considered blank. Our research revealed three lines of text on the supposedly blank side and four "new" lines on the front side. Our results demonstrate the need for multispectral image acquisition for both sides of all ancient ink ostraca. Moreover, in certain cases we recommend employing multispectral techniques for screening newly unearthed ceramic potsherds prior to disposal.

  5. Text, photo, and line extraction in scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2012-07-01

    We propose a page layout analysis algorithm to classify a scanned document into different regions such as text, photo, or strong lines. The proposed scheme consists of five modules. The first module performs several image preprocessing techniques such as image scaling, filtering, color space conversion, and gamma correction to enhance the scanned image quality and reduce the computation time in later stages. Text detection is applied in the second module wherein wavelet transform and run-length encoding are employed to generate and validate text regions, respectively. The third module uses a Markov random field based block-wise segmentation that employs a basis vector projection technique with maximum a posteriori probability optimization to detect photo regions. In the fourth module, methods for edge detection, edge linking, line-segment fitting, and Hough transform are utilized to detect strong edges and lines. In the last module, the resultant text, photo, and edge maps are combined to generate a page layout map using K-Means clustering. The proposed algorithm has been tested on several hundred documents that contain simple and complex page layout structures and contents such as articles, magazines, business cards, dictionaries, and newsletters, and compared against state-of-the-art page-segmentation techniques with benchmark performance. The results indicate that our methodology achieves an average of ˜89% classification accuracy in text, photo, and background regions.
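
    For the strong-line module, a minimal sketch of edge detection followed by a probabilistic Hough transform is given below; OpenCV is assumed and the threshold and length parameters are illustrative, not those used in the paper.

        import cv2
        import numpy as np

        page = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)

        # Edge map first, then probabilistic Hough transform for long straight lines.
        edges = cv2.Canny(page, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=120,
                                minLineLength=200, maxLineGap=5)

        # Draw detected strong lines onto a copy of the page for inspection.
        line_map = cv2.cvtColor(page, cv2.COLOR_GRAY2BGR)
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                cv2.line(line_map, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.imwrite("strong_lines.png", line_map)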

  6. A semantic model for multimodal data mining in healthcare information systems.

    PubMed

    Iakovidis, Dimitris; Smailis, Christos

    2012-01-01

    Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e. measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap by enabling direct links between low-level features and higher-level concepts, e.g. describing body parts, anatomies and pathological findings. The proposed model has been developed in web ontology language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for automatic annotation of medical data.

  7. Forensic implications: adolescent sexting and cyberbullying.

    PubMed

    Korenis, Panagiota; Billick, Stephen Bates

    2014-03-01

    Adolescence is marked by establishing a sense of identity, core values, a sense of one's relationship to the outside world and heightened peer relationships. In addition, there is also risk taking, impulsivity, self exploration and dramatic increase in sexuality. The dramatic increase in the use of cell phones and the Internet has additional social implications of sexting and cyberbullying. Sexting refers to the practice of sending sexually explicit material including language or images to another person's cell phone. Cyberbullying refers to the use of this technology to socially exclude, threaten, insult or shame another person. Studies of cell phone use in the 21st century report well over 50% of adolescents use them and that text messaging is the communication mode of choice. Studies also show a significant percentage of adolescents send and receive sex messaging, both text and images. This paper will review this expanding literature. Various motivations for sexting will also be reviewed. This new technology presents many dangers for adolescents. The legal implications are extensive and psychiatrists may play an important role in evaluation of some of these adolescents in the legal context. This paper will also make suggestions on future remedies and preventative actions.

  8. Green emission fluorophores in eyes with atrophic age-related macular degeneration: a colour fundus autofluorescence pilot study.

    PubMed

    Borrelli, Enrico; Lei, Jianqin; Balasubramanian, Siva; Uji, Akihito; Cozzi, Mariano; Sarao, Valentina; Lanzetta, Paolo; Staurenghi, Giovanni; Sadda, SriniVas R

    2018-06-01

    To investigate the presence of short-wave fluorophores within regions of age-related macular degeneration (AMD)-associated macular atrophy (MA) area. This is a prospective, observational, cross-sectional case series. 25 eyes (18 patients) with late AMD and clinically identified MA were enrolled. Eyes were imaged using a confocal light-emitting diode blue-light fundus autofluorescence (FAF) device (EIDON, CenterVue, Padua, Italy) with 450 nm excitation wavelength and the capability for 'colour' FAF imaging, including both the individual red and green components of the emission spectrum. To produce images with a high contrast for isolating the green component, the red component was subtracted from the total FAF image. The main outcome measure was the presence of green emission fluorescence component (GEFC) within the MA area. Volume spectral domain optical coherence tomography (SD-OCT) scans were obtained through the macula and the OCT was correlated with the MA lesions identified on the FAF images, including regions of increased GEFC. Of the investigated eyes, 11 out of 25 (44.0 %) showed the absence of GEFC in the MA area, whereas 14 eyes (56.0%) were characterised by GEFC within the MA area. The presence and distribution of GEFC in the MA area correlated with the presence of hyper-reflective material over Bruch's membrane on the corresponding SD-OCT scans. Short-wave fluorophores, which contribute to the GEFC, are present in the MA area and appear to correspond to residual debris or drusenoid material. Short-wavelength fluorophores revealed by colour FAF imaging may warrant further study. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
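
    The red-subtraction step used to isolate the green emission component can be sketched as simple channel arithmetic. The snippet below is a minimal illustration assuming the total FAF image and its red component have been exported as separate grayscale files (file names are hypothetical); it is not the device's processing pipeline.

        # Minimal sketch of isolating the green emission component by subtracting the
        # red component from the total FAF signal; file names are illustrative assumptions.
        import numpy as np
        from PIL import Image

        total_faf = np.asarray(Image.open("total_faf.png").convert("L")).astype(np.int16)
        red_comp = np.asarray(Image.open("red_component.png").convert("L")).astype(np.int16)

        # Subtracting the red component leaves a high-contrast map of the green
        # emission fluorescence component (GEFC).
        gefc = np.clip(total_faf - red_comp, 0, 255).astype(np.uint8)
        Image.fromarray(gefc).save("gefc_map.png")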

  9. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface (GUI) in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is captured through the GUI, the MSER algorithm automatically detects candidate features in the input image, and non-text features are then filtered out based on morphological differences between text and non-text regions. We implemented a text character alignment algorithm that improves the accuracy of the original text detection. We also compare the built-in MATLAB OCR engine with an open source OCR engine commonly used to obtain better detection results; a post-detection algorithm and natural language processing are applied for word correction and false detection inhibition. Finally, the detection result is linked to the internet to perform online matching. The algorithm achieves more than 86% accuracy.
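
    A minimal open-source sketch of the same idea (MSER region detection, crude geometric filtering, then OCR) is shown below; it uses OpenCV and Tesseract rather than MATLAB, and the file name and filter thresholds are illustrative assumptions.

        # Minimal sketch of MSER-based text-region detection followed by OCR.
        import cv2
        import pytesseract
        from PIL import Image

        image = cv2.imread("book_cover.jpg")              # assumed input photo
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        mser = cv2.MSER_create()
        regions, boxes = mser.detectRegions(gray)

        # Crude geometric filter: discard regions whose aspect ratio or size is
        # unlikely for a text character.
        candidates = [(x, y, w, h) for (x, y, w, h) in boxes
                      if 0.1 < w / float(h) < 10 and h > 8]
        print(len(candidates), "candidate character regions")

        # Run OCR on the whole cover; a real pipeline would restrict it to the
        # validated text regions.
        text = pytesseract.image_to_string(Image.fromarray(gray))
        print(text)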

  10. Script-independent text line segmentation in freestyle handwritten documents.

    PubMed

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
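
    The density-estimation stage can be approximated by smoothing a binarized ink map so that text lines appear as ridges of high probability. The sketch below is a simplified stand-in for the authors' estimator; the binarization threshold and smoothing widths are assumptions.

        # Minimal sketch of a text-line probability map built by density estimation.
        import numpy as np
        from PIL import Image
        from scipy.ndimage import gaussian_filter

        page = np.asarray(Image.open("handwritten_page.png").convert("L"))
        ink = (page < 128).astype(float)           # simple global binarization: ink = dark pixels

        # Anisotropic smoothing (wider along x) so horizontal text lines merge into
        # smooth ridges of high density.
        density = gaussian_filter(ink, sigma=(3, 15))
        prob_map = density / max(density.max(), 1e-12)   # normalize to [0, 1]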

  11. Evolving Deep Networks Using HPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Steven R.; Rose, Derek C.; Johnston, Travis

    While a large number of deep learning networks have been studied and published that produce outstanding results on natural image datasets, these datasets only make up a fraction of those to which deep learning can be applied. These datasets include text data, audio data, and arrays of sensors that have very different characteristics than natural images. As these “best” networks for natural images have been largely discovered through experimentation and cannot be proven optimal on some theoretical basis, there is no reason to believe that they are the optimal network for these drastically different datasets. Hyperparameter search is thus often a very important process when applying deep learning to a new problem. In this work we present an evolutionary approach to searching the possible space of network hyperparameters and construction that can scale to 18,000 nodes. This approach is applied to datasets of varying types and characteristics where we demonstrate the ability to rapidly find the best hyperparameters in order to enable practitioners to quickly iterate between idea and result.
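
    A toy version of an evolutionary hyperparameter search loop is sketched below. The search space, population size, and the placeholder fitness function are illustrative assumptions; in practice the fitness of an individual would be the validation accuracy of the trained network.

        # Minimal sketch of an evolutionary hyperparameter search loop.
        import random

        SPACE = {"learning_rate": [1e-4, 1e-3, 1e-2],
                 "num_layers": [2, 4, 8],
                 "filters": [16, 32, 64, 128]}

        def random_individual():
            return {k: random.choice(v) for k, v in SPACE.items()}

        def mutate(ind):
            child = dict(ind)
            key = random.choice(list(SPACE))
            child[key] = random.choice(SPACE[key])   # resample one hyperparameter
            return child

        def fitness(ind):
            # Placeholder objective; replace with training + validation accuracy.
            return -abs(ind["num_layers"] - 4) - abs(ind["filters"] - 64) / 64.0

        population = [random_individual() for _ in range(20)]
        for generation in range(10):
            population.sort(key=fitness, reverse=True)
            survivors = population[:5]                       # keep the fittest
            population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

        print(max(population, key=fitness))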

  12. A maximally stable extremal region based scene text localization method

    NASA Astrophysics Data System (ADS)

    Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei

    2015-07-01

    Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered by using the properties of fitting ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal regions projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieve relatively high precision and recall rates than the latest published algorithms.

  13. Finding text in color images

    NASA Astrophysics Data System (ADS)

    Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga

    1998-04-01

    In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
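
    Measuring similarity on both colour and spatial proximity can be sketched by clustering five-dimensional pixel features (R, G, B, x, y). The snippet below is a rough illustration of that idea, not the authors' algorithm; the cluster count, spatial weight, and file name are assumptions.

        # Minimal sketch of colour-plus-position clustering of image pixels.
        import numpy as np
        from PIL import Image
        from sklearn.cluster import KMeans

        img = np.asarray(Image.open("web_image.png").convert("RGB")).astype(float)
        h, w, _ = img.shape
        ys, xs = np.mgrid[0:h, 0:w]

        spatial_weight = 0.3
        features = np.column_stack([
            img.reshape(-1, 3),                               # R, G, B
            spatial_weight * xs.reshape(-1, 1) * 255.0 / w,   # scaled x position
            spatial_weight * ys.reshape(-1, 1) * 255.0 / h,   # scaled y position
        ])

        labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
        label_image = labels.reshape(h, w)                    # candidate text/background layers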

  14. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme based on H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map for compression. Every block selects its coding mode from the two new modes and the previous intra modes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images, while keeping performance comparable to H.264 for natural images.
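
    The base-colors-and-index-map idea can be illustrated by quantizing a block to a small palette and storing one palette index per pixel. The sketch below is a simplified stand-in (frequency-based palette selection, no prediction or entropy coding); the block and palette sizes are assumptions.

        # Minimal sketch of a BCIM-style base-colors-and-index-map block representation.
        import numpy as np

        def encode_block(block, n_colors=4):
            pixels = block.reshape(-1, 3).astype(float)
            # Use the most frequent distinct colours as base colours (text/graphics
            # blocks typically contain only a handful of colours).
            colors, counts = np.unique(pixels, axis=0, return_counts=True)
            base = colors[np.argsort(counts)[-n_colors:]]
            # Index map: nearest base colour for every pixel.
            dists = np.linalg.norm(pixels[:, None, :] - base[None, :, :], axis=2)
            index_map = dists.argmin(axis=1).reshape(block.shape[:2])
            return base, index_map

        def decode_block(base, index_map):
            return base[index_map].astype(np.uint8)

        block = np.random.randint(0, 255, size=(16, 16, 3), dtype=np.uint8)  # stand-in block
        base, index_map = encode_block(block)
        reconstructed = decode_block(base, index_map)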

  15. Semantics-Based Intelligent Indexing and Retrieval of Digital Images - A Case Study

    NASA Astrophysics Data System (ADS)

    Osman, Taha; Thakker, Dhavalkumar; Schaefer, Gerald

    The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they typically rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this chapter we present a semantically enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries as well as presenting our initial thoughts on exploiting lexical databases for explicit semantic-based query expansion.

  16. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    PubMed

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes. © 2013 American Association of Anatomists.

  17. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    NASA Astrophysics Data System (ADS)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural/archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels: a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.

  18. Digital map data, text, and graphical images in support of the 1995 National assessment of United States oil and gas resources

    USGS Publications Warehouse

    Beeman, William R.; Obuch, Raymond C.; Brewton, James D.

    1996-01-01

    This CD-ROM contains files in support of the 1995 USGS National assessment of United States oil and gas resources (DDS-30), which was published separately and summarizes the results of a 3-year study of the oil and gas resources of the onshore and state waters of the United States. The study describes about 560 oil and gas plays in the United States--confirmed and hypothetical, conventional and unconventional. A parallel study of the Federal offshore is being conducted by the U.S. Minerals Management Service. This CD-ROM contains files in multiple formats, so that almost any computer user can import them into word processors and mapping software packages. No proprietary data are released on this CD-ROM. The complete text of DDS-30 is also available, as well as many figures. A companion CD-ROM (DDS-36) includes the tabular data, the programs, and the same text data, but none of the map data.

  19. Open source OCR framework using mobile devices

    NASA Astrophysics Data System (ADS)

    Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan

    2008-02-01

    Mobile phones have evolved from passive one-to-one communication devices into powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet, and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors and translators. These applications help people quickly gather information in digital format and interpret it without the need to carry laptops or tablet PCs. However, despite these advancements, very little open source software is available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none are available on the mobile platform. With this in mind, we propose a complete text detection and recognition system with speech synthesis ability, built from existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community, including the popular open source OCR engine Tesseract for text detection and recognition and the Flite speech synthesis module for adding text-to-speech ability.
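
    The OCR-plus-speech chain can be sketched on the desktop with off-the-shelf wrappers, substituting pytesseract for the Tesseract component and pyttsx3 for Flite; the input file name is an illustrative assumption.

        # Minimal sketch of chaining OCR with speech synthesis.
        import pytesseract
        import pyttsx3
        from PIL import Image

        text = pytesseract.image_to_string(Image.open("business_card.png"))

        engine = pyttsx3.init()       # offline text-to-speech engine
        engine.say(text)              # speak the recognized text
        engine.runAndWait()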

  20. Computation of reliable textural indices from multimodal brain MRI: suggestions based on a study of patients with diffuse intrinsic pontine glioma.

    PubMed

    Goya-Outi, Jessica; Orlhac, Fanny; Calmon, Raphael; Alentorn, Agusti; Nioche, Christophe; Philippe, Cathy; Puget, Stéphanie; Boddaert, Nathalie; Buvat, Irène; Grill, Jacques; Frouin, Vincent; Frouin, Frederique

    2018-05-10

    Few methodological studies regarding the robustness of widely used textural indices in MRI have been reported. In this context, this study aims to propose some rules to compute reliable textural indices from multimodal 3D brain MRI. Diagnosis and post-biopsy MR scans including T1, post-contrast T1, T2 and FLAIR images from thirty children with diffuse intrinsic pontine glioma (DIPG) were considered. The hybrid white stripe method was adapted to standardize MR intensities. Sixty textural indices were then computed for each modality in different regions of interest (ROI), including tumor and white matter (WM). Three types of intensity binning were compared [Formula: see text]: constant bin width and relative bounds; [Formula: see text] constant number of bins and relative bounds; [Formula: see text] constant number of bins and absolute bounds. The impact of the volume of the region was also tested within the WM. First, the mean Hellinger distance between patient-based intensity distributions decreased by a factor greater than 10 in WM and greater than 2.5 in gray matter after standardization. Regarding the binning strategy, the ranking of patients was highly correlated for 188/240 features when comparing [Formula: see text] with [Formula: see text], but for only 20 when comparing [Formula: see text] with [Formula: see text], and nine when comparing [Formula: see text] with [Formula: see text]. Furthermore, when using [Formula: see text] or [Formula: see text], texture indices reflected tumor heterogeneity as assessed visually by experts. Last, 41 features presented statistically significant differences between contralateral WM regions when ROI size varied slightly across patients, and none when using ROIs of the same size. For regions with similar size, 224 features were significantly different between WM and tumor. Valuable information from texture indices can be biased by methodological choices. Recommendations are to standardize intensities in MR brain volumes, to use intensity binning with constant bin width, and to define regions with the same volumes to get reliable textural indices.
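
    The three binning strategies compared above can be written down directly: constant bin width with per-region bounds, a fixed bin count with per-region bounds, and a fixed bin count with absolute bounds shared across patients. The sketch below uses simulated intensities, and the bin width, bin count, and absolute bounds are illustrative assumptions.

        # Minimal sketch of the three intensity-binning strategies.
        import numpy as np

        roi = np.random.normal(loc=100.0, scale=15.0, size=5000)   # stand-in ROI intensities

        # (a) constant bin width, relative bounds (per-ROI min/max)
        width = 5.0
        edges_a = np.arange(roi.min(), roi.max() + width, width)

        # (b) constant number of bins, relative bounds
        n_bins = 64
        edges_b = np.linspace(roi.min(), roi.max(), n_bins + 1)

        # (c) constant number of bins, absolute bounds shared by all patients
        lo, hi = 0.0, 300.0
        edges_c = np.linspace(lo, hi, n_bins + 1)

        hist_a = np.histogram(roi, bins=edges_a)[0]
        hist_b = np.histogram(roi, bins=edges_b)[0]
        hist_c = np.histogram(np.clip(roi, lo, hi), bins=edges_c)[0]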

  1. Automatic contour propagation using deformable image registration to determine delivered dose to spinal cord in head-and-neck cancer radiotherapy.

    PubMed

    Yeap, P L; Noble, D J; Harrison, K; Bates, A M; Burnet, N G; Jena, R; Romanchikova, M; Sutcliffe, M P F; Thomas, S J; Barnett, G C; Benson, R J; Jefferies, S J; Parker, M A

    2017-07-12

    To determine delivered dose to the spinal cord, a technique has been developed to propagate manual contours from kilovoltage computed-tomography (kVCT) scans for treatment planning to megavoltage computed-tomography (MVCT) guidance scans. The technique uses the Elastix software to perform intensity-based deformable image registration of each kVCT scan to the associated MVCT scans. The registration transform is then applied to contours of the spinal cord drawn manually on the kVCT scan, to obtain contour positions on the MVCT scans. Different registration strategies have been investigated, with performance evaluated by comparing the resulting auto-contours with manual contours, drawn by oncologists. The comparison metrics include the conformity index (CI), and the distance between centres (DBC). With optimised registration, auto-contours generally agree well with manual contours. Considering all 30 MVCT scans for each of three patients, the median CI is [Formula: see text], and the median DBC is ([Formula: see text]) mm. An intra-observer comparison for the same scans gives a median CI of [Formula: see text] and a DBC of ([Formula: see text]) mm. Good levels of conformity are also obtained when auto-contours are compared with manual contours from one observer for a single MVCT scan for each of 30 patients, and when they are compared with manual contours from six observers for two MVCT scans for each of three patients. Using the auto-contours to estimate organ position at treatment time, a preliminary study of 33 patients who underwent radiotherapy for head-and-neck cancers indicates good agreement between planned and delivered dose to the spinal cord.
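
    The two comparison metrics can be sketched on binary masks as below. A Jaccard-style overlap is used for the conformity index and the Euclidean distance between mask centroids for the distance between centres; this is a generic illustration under assumed definitions and voxel spacing, not necessarily the authors' exact formulation.

        # Minimal sketch of conformity index (CI) and distance between centres (DBC)
        # for two binary contour masks.
        import numpy as np

        def conformity_index(a, b):
            a, b = a.astype(bool), b.astype(bool)
            return np.logical_and(a, b).sum() / float(np.logical_or(a, b).sum())

        def distance_between_centres(a, b, spacing=(1.0, 1.0, 1.0)):
            ca = np.array(np.nonzero(a)).mean(axis=1) * np.array(spacing)
            cb = np.array(np.nonzero(b)).mean(axis=1) * np.array(spacing)
            return np.linalg.norm(ca - cb)

        auto = np.zeros((40, 64, 64), dtype=bool)
        auto[10:30, 20:30, 20:30] = True                 # auto-propagated contour (toy)
        manual = np.zeros_like(auto)
        manual[11:31, 21:31, 21:31] = True               # manual contour (toy)
        print(conformity_index(auto, manual), distance_between_centres(auto, manual))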

  2. Meta-Generalis: A Novel Method for Structuring Information from Radiology Reports

    PubMed Central

    Barbosa, Flavio; Traina, Agma Jucci

    2016-01-01

    Background: A structured report for imaging exams aims at increasing the precision in information retrieval and communication between physicians. However, it is more concise than free text and may limit specialists’ descriptions of important findings not covered by pre-defined structures. A computational ontological structure derived from free texts designed by specialists may be a solution for this problem. Therefore, the goal of our study was to develop a methodology for structuring information in radiology reports covering specifications required for the Brazilian Portuguese language, including the terminology to be used. Methods: We gathered 1,701 radiological reports of magnetic resonance imaging (MRI) studies of the lumbosacral spine from three different institutions. Techniques of text mining and ontological conceptualization of lexical units extracted were used to structure information. Ten radiologists, specialists in lumbosacral MRI, evaluated the textual superstructure and terminology extracted using an electronic questionnaire. Results: The established methodology consists of six steps: 1) collection of radiology reports of a specific MRI examination; 2) textual decomposition; 3) normalization of lexical units; 4) identification of textual superstructures; 5) conceptualization of candidate-terms; and 6) evaluation of superstructures and extracted terminology by experts using an electronic questionnaire. Three different textual superstructures were identified, with terminological variations in the names of their textual categories. The number of candidate-terms conceptualized was 4,183, yielding 727 concepts. There were a total of 13,963 relationships between candidate-terms and concepts and 789 relationships among concepts. Conclusions: The proposed methodology allowed structuring information in a more intuitive and practical way. Identification of three textual superstructures, extraction of lexical units, and their normalization and ontological conceptualization were achieved while maintaining references to their respective categories and the free-text radiology reports. PMID:27580980

  3. Meta-generalis: A novel method for structuring information from radiology reports.

    PubMed

    Barbosa, Flavio; Traina, Agma Jucci; Muglia, Valdair Francisco

    2016-08-24

    A structured report for imaging exams aims at increasing the precision in information retrieval and communication between physicians. However, it is more concise than free text and may limit specialists' descriptions of important findings not covered by pre-defined structures. A computational ontological structure derived from free texts designed by specialists may be a solution for this problem. Therefore, the goal of our study was to develop a methodology for structuring information in radiology reports covering specifications required for the Brazilian Portuguese language, including the terminology to be used. We gathered 1,701 radiological reports of magnetic resonance imaging (MRI) studies of the lumbosacral spine from three different institutions. Techniques of text mining and ontological conceptualization of lexical units extracted were used to structure information. Ten radiologists, specialists in lumbosacral MRI, evaluated the textual superstructure and terminology extracted using an electronic questionnaire. The established methodology consists of six steps: 1) collection of radiology reports of a specific MRI examination; 2) textual decomposition; 3) normalization of lexical units; 4) identification of textual superstructures; 5) conceptualization of candidate-terms; and 6) evaluation of superstructures and extracted terminology by experts using an electronic questionnaire. Three different textual superstructures were identified, with terminological variations in the names of their textual categories. The number of candidate-terms conceptualized was 4,183, yielding 727 concepts. There were a total of 13,963 relationships between candidate-terms and concepts and 789 relationships among concepts. The proposed methodology allowed structuring information in a more intuitive and practical way. Identification of three textual superstructures, extraction of lexical units, and their normalization and ontological conceptualization were achieved while maintaining references to their respective categories and the free-text radiology reports.

  5. Search of the Deep and Dark Web via DARPA Memex

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.

    2015-12-01

    Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on the notion of link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, to URL selection and filtering approaches, and to the link graph generated from search, and it has spawned entire sub-communities including Apache Hadoop and Apache Tika. It addresses many current needs and has the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and allow us to address complex search problems, including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin, combined with work by Harvard, to provide image and video understanding support, allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding and object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls. With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and plotting techniques for exploring crawled information. The platform classifies, identifies, and thwarts predators, helps to find victims, and identifies buyers in human trafficking, and will deliver technological superiority in search engines for DARPA. We are already transitioning the technologies into Geo and Planetary Science, and Bioinformatics.

  6. Effect of cerebral spinal fluid suppression for diffusional kurtosis imaging.

    PubMed

    Yang, Alicia W; Jensen, Jens H; Hu, Caixia C; Tabesh, Ali; Falangola, Maria F; Helpern, Joseph A

    2013-02-01

    To evaluate the cerebral spinal fluid (CSF) partial volume effect on diffusional kurtosis imaging (DKI) metrics in white matter and cortical gray matter. Four healthy volunteers participated in this study. Standard DKI and fluid-attenuated inversion recovery (FLAIR) DKI experiments were performed using a twice-refocused-spin-echo diffusion sequence. The conventional diffusion tensor imaging (DTI) metrics of fractional anisotropy (FA) and mean, axial, and radial diffusivity (MD, D[symbol in text], D[symbol in text]), together with the DKI metrics of mean, axial, and radial kurtosis (MK, K[symbol in text], K[symbol in text]), were measured and compared. Single image slices located above the lateral ventricles, with similar anatomical features for each subject, were selected to minimize the effect of CSF from the ventricles. In white matter, differences of less than 10% were observed between diffusion metrics measured with standard DKI and FLAIR-DKI sequences, suggesting minimal CSF contamination. For gray matter, conventional DTI metrics differed by 19% to 52%, reflecting significant CSF partial volume effects. Kurtosis metrics, however, changed by 11% or less, indicating greater robustness with respect to CSF contamination. Kurtosis metrics are less sensitive to CSF partial voluming in cortical gray matter than conventional diffusion metrics. The kurtosis metrics may then be more specific indicators of changes in tissue microstructure, provided the effect sizes for the changes are comparable. Copyright © 2012 Wiley Periodicals, Inc.

  7. Duplicate document detection in DocBrowse

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien

    1998-04-01

    Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second algorithm is based on wavelet features extracted from the document image itself, and the third algorithm is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images which is currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents, and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent while the image-based method has an average 11-point precision of 98.9 percent. However, in general, the text-based method performs better when the document contains enough high-quality machine-printed text while the image-based method performs better when the document contains little or no good-quality machine-readable text.
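
    A text-feature duplicate test in this spirit can be sketched with character shingles and Jaccard similarity over the OCR output of two documents; the shingle length and decision threshold below are illustrative assumptions, not the DocBrowse algorithm.

        # Minimal sketch of a text-based duplicate test using character shingles.
        def shingles(text, k=5):
            text = " ".join(text.lower().split())          # normalize whitespace and case
            return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

        def jaccard(a, b):
            a, b = shingles(a), shingles(b)
            return len(a & b) / float(len(a | b)) if (a | b) else 0.0

        doc1 = "Quarterly report on regional declassification activities."
        doc2 = "Quarterly report on regional declassification activity."
        is_duplicate = jaccard(doc1, doc2) > 0.8           # assumed threshold
        print(jaccard(doc1, doc2), is_duplicate)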

  8. Image Processing: A State-of-the-Art Way to Learn Science.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    Teachers participating in the Image Processing for Teaching Process, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting in classrooms. Because image processing is not a computerized text, it…

  9. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...

  10. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...

  11. Mental Imagery, Text Illustrations, and Children's Story Comprehension and Recall.

    ERIC Educational Resources Information Center

    Gambrell, Linda B.; Jawitz, Paula Brooks

    1993-01-01

    Investigates the effects of instructions to induce mental imagery and attend to text illustrations on fourth graders' reading comprehension and recall of narrative text. Finds that images and illustrations independently enhanced reading performance and that, in combination, these two strategies resulted in impressive increases in children's…

  12. Choosing and Using Text-to-Speech Software

    ERIC Educational Resources Information Center

    Peters, Tom; Bell, Lori

    2007-01-01

    This article describes a computer-based technology for generating speech called text-to-speech (TTS). This software is ready for widespread use by libraries, other organizations, and individual users. It offers the affordable ability to turn just about any electronic text that is not image-based into an artificially spoken communication. The…

  13. Balkan Identity: Changing Self-Images of the South Slavs

    ERIC Educational Resources Information Center

    Saric, Ljiljana

    2004-01-01

    This paper provides an analysis of texts containing the noun "the Balkans" and the adjective "Balkan" in a small corpus of approximately 80 journalistic texts from different south Slavic regions currently available online. The texts were published over the last 10 years. The term "the Balkans" and its derivations…

  14. Generating realistic images using Kray

    NASA Astrophysics Data System (ADS)

    Tanski, Grzegorz

    2004-07-01

    Kray is an application for creating realistic images. It is written in the C++ programming language, has a text-based interface, and solves the global illumination problem using techniques such as radiosity, path tracing, and photon mapping.

  15. Responsive Image Inline Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Ian

    2016-10-20

    RIIF is a contributed module for the Drupal PHP web application framework (drupal.org). It is written as a helper or sub-module of other code which is part of version 8 "core Drupal" and is intended to extend its functionality. It allows Drupal to resize images uploaded through the user-facing text editor within the Drupal GUI (a.k.a. "inline images") for various browser widths. This resizing is already done for other images through the parent "Responsive Image" core module. This code extends that functionality to inline images.

  16. Geometric modeling of hepatic arteries in 3D ultrasound with unsupervised MRA fusion during liver interventions.

    PubMed

    Gérard, Maxime; Michaud, François; Bigot, Alexandre; Tang, An; Soulez, Gilles; Kadoury, Samuel

    2017-06-01

    Modulating the chemotherapy injection rate with regard to blood flow velocities in the tumor-feeding arteries during intra-arterial therapies may help improve liver tumor targeting while decreasing systemic exposure. These velocities can be obtained noninvasively using Doppler ultrasound (US). However, small vessels situated in the liver are difficult to identify and follow in US. We propose a multimodal fusion approach that non-rigidly registers a 3D geometric mesh model of the hepatic arteries obtained from preoperative MR angiography (MRA) acquisitions with intra-operative 3D US imaging. The proposed fusion tool integrates 3 imaging modalities: an arterial MRA, a portal phase MRA and an intra-operative 3D US. Preoperatively, the arterial phase MRA is used to generate a 3D model of the hepatic arteries, which is then non-rigidly co-registered with the portal phase MRA. Once the intra-operative 3D US is acquired, we register it with the portal MRA using a vessel-based rigid initialization followed by a non-rigid registration using an image-based metric based on linear correlation of linear combination. Using the combined non-rigid transformation matrices, the 3D mesh model is fused with the 3D US. 3D US and multi-phase MRA images acquired from 10 porcine models were used to test the performance of the proposed fusion tool. Unimodal registration of the MRA phases yielded a target registration error (TRE) of [Formula: see text] mm. Initial rigid alignment of the portal MRA and 3D US yielded a mean TRE of [Formula: see text] mm, which was significantly reduced to [Formula: see text] mm ([Formula: see text]) after affine image-based registration. The following deformable registration step allowed for further decrease of the mean TRE to [Formula: see text] mm. The proposed tool could facilitate visualization and localization of these vessels when using 3D US intra-operatively for either intravascular or percutaneous interventions to avoid vessel perforation.

  17. Three dimensional reconstruction of therapeutic carbon ion beams in phantoms using single secondary ion tracks.

    PubMed

    Reinhart, Anna Merle; Spindeldreier, Claudia Katharina; Jakubek, Jan; Martišíková, Mária

    2017-06-21

    Carbon ion beam radiotherapy enables a very localised dose deposition. However, even small changes in the patient geometry or positioning errors can significantly distort the dose distribution. A live, non-invasive monitoring system of the beam delivery within the patient is therefore highly desirable, and could improve patient treatment. We present a novel three-dimensional method for imaging the beam in the irradiated object, exploiting the measured tracks of single secondary ions emerging under irradiation. The secondary particle tracks are detected with a TimePix stack, a set of parallel pixelated semiconductor detectors. We developed a three-dimensional reconstruction algorithm based on maximum likelihood expectation maximization. We demonstrate the applicability of the new method in the irradiation of a cylindrical PMMA phantom of human head size with a carbon ion pencil beam of [Formula: see text] MeV u^-1. The beam image in the phantom is reconstructed from a set of nine discrete detector positions between [Formula: see text] and [Formula: see text] from the beam axis. Furthermore, we demonstrate the potential to visualize inhomogeneities by irradiating a PMMA phantom with an air gap as well as bone and adipose tissue surrogate inserts. We successfully reconstructed a three-dimensional image of the treatment beam in the phantom from single secondary ion tracks. The beam image corresponds well to the beam direction and energy. In addition, cylindrical inhomogeneities with a diameter of [Formula: see text] cm and density differences down to [Formula: see text] g cm^-3 to the surrounding material are clearly visualized. This novel three-dimensional method to image a therapeutic carbon ion beam in the irradiated object does not interfere with the treatment and requires knowledge only of single secondary ion tracks. Even with detectors with only a small angular coverage, the three-dimensional reconstruction of the fragmentation points presented in this work was found to be feasible.
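
    The reconstruction step is based on maximum likelihood expectation maximization, whose multiplicative update for a linear emission model y = A x can be sketched generically as below; the simulated system matrix, counts, and iteration budget are illustrative assumptions rather than the authors' detector geometry.

        # Minimal sketch of a generic MLEM update for a linear emission model y = A x.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((200, 50))                 # system matrix: voxel -> detector response
        x_true = rng.random(50)
        y = rng.poisson(50.0 * (A @ x_true))      # simulated secondary-ion counts

        x = np.ones(50)                           # non-negative initial estimate
        sensitivity = A.sum(axis=0)               # A^T 1
        for _ in range(100):
            forward = A @ x
            ratio = y / np.maximum(forward, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)   # multiplicative MLEM update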

  18. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
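
    A crude full-reference check in the same spirit is to OCR both the original and the compressed image and compare the recognized strings; the sketch below uses a plain string-similarity ratio rather than the proposed feature-based ISQA measure, and the file names are assumptions.

        # Minimal sketch of an OCR-oriented semantic quality check on a compressed image.
        import difflib
        import pytesseract
        from PIL import Image

        ref_text = pytesseract.image_to_string(Image.open("original.png"))
        cmp_text = pytesseract.image_to_string(Image.open("compressed.jpg"))

        semantic_quality = difflib.SequenceMatcher(None, ref_text, cmp_text).ratio()
        print(f"OCR agreement with the original: {semantic_quality:.3f}")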

  19. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    PubMed

    Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee

    2018-04-01

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures on their experiences, much online visual information exists in the form of image streams, for which it would better take into consideration of the whole image stream to produce natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both input and output dimension to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach directly learns from vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22 K unique blog posts with 170 K associated images for the travel topics of NYC, Disneyland , Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  20. An image-processing method to detect sub-optical features based on understanding noise in intensity measurements.

    PubMed

    Bhatia, Tripta

    2018-07-01

    Accurate quantitative analysis of image data requires that we distinguish between fluorescence intensity (true signal) and the noise inherent to its measurements to the extent possible. We image multilamellar membrane tubes and beads that grow from defects in the fluid lamellar phase of the lipid 1,2-dioleoyl-sn-glycero-3-phosphocholine dissolved in water and water-glycerol mixtures using a fluorescence confocal polarizing microscope. We quantify image noise and determine the noise statistics. Understanding the nature of image noise also helps in optimizing image processing to detect sub-optical features, which would otherwise remain hidden. We use an image-processing technique "optimum smoothening" to improve the signal-to-noise ratio of features of interest without smearing their structural details. A high SNR provides the positional accuracy needed to resolve features of interest whose width lies below the optical resolution. Using optimum smoothening, the smallest and largest core diameters detected in this paper are [Formula: see text] and [Formula: see text] nm wide, respectively. The image-processing and analysis techniques and the noise modeling discussed in this paper can be used for detailed morphological analysis of features down to sub-optical length scales that are obtained by any kind of fluorescence intensity imaging in the raster mode.
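
    The effect of smoothing on the signal-to-noise ratio can be illustrated with a synthetic image; the Gaussian filter below is only a stand-in for the optimum-smoothening step, and the signal and background regions are assumptions.

        # Minimal sketch of measuring SNR before and after smoothing a noisy image.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(1)
        image = rng.normal(10.0, 3.0, size=(256, 256))    # background + noise
        image[100:110, :] += 20.0                         # a bright tube-like feature

        def snr(img, signal_rows=slice(100, 110), bg_rows=slice(0, 50)):
            signal = img[signal_rows].mean() - img[bg_rows].mean()
            return signal / img[bg_rows].std()

        smoothed = gaussian_filter(image, sigma=2.0)
        print(f"SNR before: {snr(image):.1f}, after smoothing: {snr(smoothed):.1f}")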
