Combination of Sharing Matrix and Image Encryption for Lossless $(k,n)$-Secret Image Sharing.
Bao, Long; Yi, Shuang; Zhou, Yicong
2017-12-01
This paper first introduces a (k,n)-sharing matrix S(k,n) and its generation algorithm. Mathematical analysis is provided to show its potential for secret image sharing. Combining the sharing matrix with image encryption, we further propose a lossless (k,n)-secret image sharing scheme (SMIE-SIS). Only when no fewer than k shares are available can all the ciphertext information and the security key be reconstructed, resulting in lossless recovery of the original information; this is proved by the correctness and security analysis. Performance evaluation and security analysis demonstrate that the proposed SMIE-SIS with arbitrary settings of k and n has at least five advantages: 1) it is able to fully recover the original image without any distortion; 2) it has much lower pixel expansion than many existing methods; 3) its computation cost is much lower than that of polynomial-based secret image sharing methods; 4) it is able to verify and detect a fake share; and 5) even using the same original image with the same initial parameter settings, every execution of SMIE-SIS generates completely different secret shares that are unpredictable and non-repetitive. This property gives SMIE-SIS a high level of security to withstand many different attacks.
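The lossless-recovery property described above can be illustrated with a much simpler (n,n) XOR scheme, in which all n shares are needed (a sketch for intuition only; it is not the paper's sharing-matrix construction, and the function names are hypothetical):

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(secret: bytes, n: int) -> list:
    """Split `secret` into n shares; all n are needed to recover it."""
    # n-1 shares are random pads; the last is the secret XORed with all pads.
    pads = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = reduce(xor_bytes, pads, secret)
    return pads + [last]

def recover(shares: list) -> bytes:
    """XOR of all n shares cancels every pad and restores the secret."""
    return reduce(xor_bytes, shares)

pixels = bytes([12, 200, 37, 255])   # toy 4-pixel grayscale "image"
shares = make_shares(pixels, 3)
print(recover(shares) == pixels)     # prints True: recovery is lossless
```

Any strict subset of shares is statistically independent of the secret, which is the security property the (k,n) construction generalizes.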
Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis.
Vest, Joshua R; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B
2015-12-01
Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic health records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004-2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. A total of 17 articles, comprising 42 different studies, were included in the review. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = -0.17; 95% confidence interval [CI] = [-0.25, -0.09]; P < .001). However, image sharing technology was associated with a significant increase in any imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most-rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
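The pooled effect sizes quoted above come from a random-effects model. A minimal sketch of the standard DerSimonian-Laird estimator follows (not the authors' code; the per-study effect sizes and variances below are invented for illustration):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size with a 95% confidence interval."""
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)       # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study effect sizes and their variances:
es, (lo, hi) = dersimonian_laird([-0.25, -0.10, -0.20, -0.05],
                                 [0.010, 0.020, 0.015, 0.030])
```

When between-study heterogeneity (tau²) is zero, the estimate reduces to the inverse-variance fixed-effect pooled ES.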
A picture tells a thousand words: A content analysis of concussion-related images online.
Ahmed, Osman H; Lee, Hopin; Struik, Laura L
2016-09-01
Recently, image-sharing social media platforms have become a popular medium for sharing health-related images and associated information. However, within the field of sports medicine, and more specifically sport-related concussion, the content of images and meta-data shared through these popular platforms has not been investigated. The aim of this study was to analyse the content of concussion-related images and their accompanying meta-data on image-sharing social media platforms. We retrieved 300 images from Pinterest, Instagram and Flickr using a standardised search strategy. All images were screened and duplicate images were removed. We excluded images if they were non-static images, illustrations, animations, or screenshots. The content and characteristics of each image were evaluated using a customised coding scheme to determine major content themes, and images were referenced against the current international concussion management guidelines. Of 300 potentially relevant images, 176 were included for analysis: 70 from Pinterest, 63 from Flickr, and 43 from Instagram. Most images were of another person or a scene (64%), with the primary content depicting injured individuals (39%). The primary purposes of the images were to share a concussion-related incident (33%) and to dispense education (19%). For those images where it could be evaluated, the majority (91%) were found to reflect the Sport Concussion Assessment Tool 3 (SCAT3) guidelines. The ability to rapidly disseminate rich information through photos, images, and infographics to a wide-reaching audience suggests that image-sharing social media platforms could be an effective communication tool for sports concussion. Public health strategies could direct educative content to targeted populations via image-sharing platforms. Further research is required to understand how image-sharing platforms can be used to effectively relay evidence-based information to patients and sports medicine clinicians. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Open Microscopy Environment: open image informatics for the biological sciences
NASA Astrophysics Data System (ADS)
Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.
2016-07-01
Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
Vaccine Images on Twitter: Analysis of What Images are Shared
Chen, Tao; Dredze, Mark
2018-01-01
Background Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386
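The retweet model described above is a standard logistic regression; the following minimal pure-Python version sketches the idea (the binary image features below are invented for illustration and are not the authors' feature set):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi                                  # per-example gradient
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Hypothetical features: [has_embedded_text, shows_person, positive_sentiment]
X = [[1, 1, 1], [0, 0, 1], [1, 0, 0], [0, 1, 1], [1, 1, 0], [0, 0, 0]]
y = [1, 1, 0, 1, 0, 0]                                  # 1 = was retweeted
w, b = train_logreg(X, y)
```

The fitted weights indicate which image characteristics correlate with a higher probability of sharing, which is how such a model yields the kind of guidance the abstract describes.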
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2017-09-01
A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
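The (t, n) threshold secret sharing used for the measurement key can be sketched with the classic Shamir construction over a prime field (an illustrative stand-in; the paper's exact key-decomposition algorithm is not reproduced here):

```python
import random

P = 2**61 - 1  # a Mersenne prime defining the field

def shamir_split(secret: int, t: int, n: int):
    """Encode `secret` as f(0) of a random degree t-1 polynomial;
    shares are the points (x, f(x)) for x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares reconstruct the secret exactly, while t-1 shares reveal nothing, which is the property the scheme leans on when distributing the measurement key.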
Informatics methods to enable sharing of quantitative imaging research data.
Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L
2012-11-01
The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms, and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable data sharing and to promote reuse of quantitative imaging data in the community. We performed a survey of the tools currently in use by the QIN member sites for representation and storage of their QIN research data, including images, image meta-data, and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There is a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms, and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.
Application of homomorphism to secure image sharing
NASA Astrophysics Data System (ADS)
Islam, Naveed; Puech, William; Hayat, Khizar; Brouzet, Robert
2011-09-01
In this paper, we present a new approach for sharing images between l players by exploiting the additive and multiplicative homomorphic properties of two well-known public-key cryptosystems, RSA and Paillier. Contrary to traditional schemes, the proposed approach employs secret sharing in a way that limits the influence of the dealer over the protocol and allows each player to participate with the help of his key-image. During the encryption step, each player encrypts his own key-image using the dealer's public key. The dealer encrypts the secret-to-be-shared image with the same public key, and the l encrypted key-images plus the encrypted to-be-shared image are then multiplied homomorphically to get another encrypted image. After this step, the dealer can safely obtain a scrambled image which corresponds to the addition or multiplication of the l + 1 original images (l key-images plus the secret image), by the additive homomorphic property of the Paillier algorithm or the multiplicative homomorphic property of the RSA algorithm. When the l players want to extract the secret image, they do not need keys and the dealer has no role: the l players need only subtract their own key-images, in no specific order, from the scrambled image. Thus, the proposed approach provides an opportunity to use operators like multiplication on encrypted images for the development of a secure privacy-preserving protocol in the image domain. We show that it is still possible to extract a visible version of the secret image with only l-1 key-images (when one key-image is missing), or when the l key-images used for the extraction differ from the l original key-images, for example due to lossy compression. Experimental results and security analysis verify that the proposed approach is secure from a cryptographic viewpoint.
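The multiplicative homomorphic property of RSA that the scheme relies on is easy to verify with textbook-sized parameters (a toy demonstration only, far too small for real security, and not the authors' full protocol):

```python
# Toy RSA key (the classic 3233 / 17 / 2753 textbook example)
p, q = 61, 53
n = p * q                              # public modulus, 3233
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 11
# Multiplying ciphertexts multiplies the underlying plaintexts (mod n):
product = dec(enc(m1) * enc(m2) % n)   # equals (m1 * m2) % n, i.e. 77
```

Paillier provides the analogous additive property (multiplying ciphertexts adds plaintexts), which is why the scheme can combine the l + 1 encrypted images without ever decrypting them individually.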
COINSTAC: Decentralizing the future of brain imaging analysis
Ming, Jing; Verner, Eric; Sarwate, Anand; Kelly, Ross; Reed, Cory; Kahleck, Torran; Silva, Rogers; Panta, Sandeep; Turner, Jessica; Plis, Sergey; Calhoun, Vince
2017-01-01
In the era of Big Data, sharing neuroimaging data across multiple sites has become increasingly important. However, researchers who want to engage in centralized, large-scale data sharing and analysis must often contend with problems such as high database cost, long data transfer time, extensive manual effort, and privacy issues for sensitive data. To remove these barriers to enable easier data sharing and analysis, we introduced a new, decentralized, privacy-enabled infrastructure model for brain imaging data called COINSTAC in 2016. We have continued development of COINSTAC since this model was first introduced. One of the challenges with such a model is adapting the required algorithms to function within a decentralized framework. In this paper, we report on how we are solving this problem, along with our progress on several fronts, including additional decentralized algorithms implementation, user interface enhancement, decentralized regression statistic calculation, and complete pipeline specifications. PMID:29123643
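Decentralized statistics of the kind described above can be sketched as gradient aggregation: each site computes a gradient on its local data and shares only that, never the data itself. The following minimal linear-regression example is an illustration of the general pattern, not COINSTAC's actual implementation, and the site data are invented:

```python
def local_gradient(w, X, y):
    """Gradient of the mean squared error at one site (data stays local)."""
    g = [0.0] * len(w)
    for xi, yi in zip(X, y):
        err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
        for j, xj in enumerate(xi):
            g[j] += 2.0 * err * xj / len(X)
    return g

def decentralized_regression(sites, dim, lr=0.01, epochs=500):
    """The aggregator averages per-site gradients and updates shared weights."""
    w = [0.0] * dim
    for _ in range(epochs):
        grads = [local_gradient(w, X, y) for X, y in sites]
        avg = [sum(g[j] for g in grads) / len(grads) for j in range(dim)]
        w = [wj - lr * gj for wj, gj in zip(w, avg)]
    return w

# Two hypothetical "sites", each holding data generated from y = 2*x
site_a = ([[1.0], [2.0]], [2.0, 4.0])
site_b = ([[3.0], [4.0]], [6.0, 8.0])
w = decentralized_regression([site_a, site_b], dim=1)   # w[0] converges to ~2.0
```

Only model parameters and gradients cross site boundaries, which is what removes the data-transfer and privacy barriers the abstract mentions.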
Fiji: an open-source platform for biological-image analysis.
Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert
2012-06-28
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
Image Sharing in Radiology: A Primer.
Chatterjee, Arindam R; Stalcup, Seth; Sharma, Arjun; Sato, T Shawn; Gupta, Pushpender; Lee, Yueh Z; Malone, Christopher; McBee, Morgan; Hotaling, Elise L; Kansagra, Akash P
2017-03-01
By virtue of its information technology-oriented infrastructure, the specialty of radiology is uniquely positioned to be at the forefront of efforts to promote data sharing across the healthcare enterprise, including particularly image sharing. The potential benefits of image sharing for clinical, research, and educational applications in radiology are immense. In this work, our group, the Association of University Radiologists (AUR) Radiology Research Alliance Task Force on Image Sharing, reviews the benefits of implementing image sharing capability, introduces current image sharing platforms and details their unique requirements, and presents emerging platforms that may see greater adoption in the future. By understanding this complex ecosystem of image sharing solutions, radiologists can become important advocates for the successful implementation of these powerful image sharing resources. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.
Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William
2018-05-08
Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.
#LancerHealth: Using Twitter and Instagram as a tool in a campus wide health promotion initiative.
Santarossa, Sara; Woodruff, Sarah J
2018-02-05
The present study aimed to explore using popular technology that people already have and use as a health promotion tool, in a campus-wide social media health promotion initiative entitled #LancerHealth. During a two-week period, the university community was asked to share photos on Twitter and Instagram answering "What does being healthy on campus look like to you?", while tagging the image with #LancerHealth. All publicly tagged media were collected using the Netlytic software and analysed. Text analysis (N=234 records, Twitter; N=141 records, Instagram) revealed that the majority of the conversation was positive and focused on health and the university. Social network analysis, based on five network properties, showed a small network with little interaction. Lastly, photo coding analysis (N=71 unique images) indicated that the majority of the shared images were of physical activity (52%) and taken on campus (80%). Further research into this area is warranted.
An Analysis of Medical Imaging Costs in Military Treatment Facilities
2014-09-01
… authority to completely control the medical systems of each service, the DHA was given management responsibility for specific shared services, functions … efficient health operations through enhanced enterprise-wide shared services … deliver more comprehensive primary care and integrated health … shared services that will fall under central control: facility planning, medical logistics, health information technology, and Tricare health plan …
ImageJ: Image processing and analysis in Java
NASA Astrophysics Data System (ADS)
Rasband, W. S.
2012-06-01
ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Greiner, Annette; Cholia, Shreyas
Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements; these optimizations are critical enablers of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet, even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
An Image Secret Sharing Method
2006-07-01
… the secret image in a lossless manner and (2) any … or fewer image shares cannot provide sufficient information to reveal the … secret image. It is an effective, reliable and secure method to prevent the secret image from being lost, stolen or corrupted. In comparison with other image secret sharing methods, this approach's advantages are its large compression rate on the size of the image shares, its strong protection of the secret image, and its ability for real-time …
Cognitive approaches for patterns analysis and security applications
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Ogiela, Lidia
2017-08-01
This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography, based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or encryption process. They also allow crypto-biometric solutions to be used to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns is presented, along with a novel application of such systems to visual secret sharing. Visual shares for divided information can be created based on a threshold procedure, which may depend on personal abilities to recognize image details visible on the divided images.
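The threshold-style visual secret sharing mentioned above can be illustrated with the classic (2, 2) Naor-Shamir construction, where each secret pixel is expanded into two subpixels and physically stacking the two shares (a pixelwise OR) reveals the image (a sketch for intuition only, not the authors' cognitive procedure):

```python
import random

def make_visual_shares(image):
    """image: 2D list of 0 (white) / 1 (black) pixels. Returns two shares,
    each pixel expanded into a pair of subpixels."""
    s1, s2 = [], []
    for row in image:
        r1, r2 = [], []
        for px in row:
            pat = random.choice([[0, 1], [1, 0]])   # random subpixel pattern
            r1 += pat
            # White: same pattern in both shares. Black: complementary pattern.
            r2 += pat if px == 0 else [1 - pat[0], 1 - pat[1]]
        s1.append(r1)
        s2.append(r2)
    return s1, s2

def stack(s1, s2):
    """Overlaying transparencies acts as a pixelwise OR."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

Stacked black pixels become fully dark (both subpixels black) while white pixels stay half-dark, so the secret is visible to the eye; each share on its own is a uniformly random pattern.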
Rubel, Oliver; Bowen, Benjamin P
2018-01-01
Mass spectrometry imaging (MSI) is a transformative imaging method that supports the untargeted, quantitative measurement of the chemical composition and spatial heterogeneity of complex samples with broad applications in life sciences, bioenergy, and health. While MSI data can be routinely collected, its broad application is currently limited by the lack of easily accessible analysis methods that can process data of the size, volume, diversity, and complexity generated by MSI experiments. The development and application of cutting-edge analytical methods is a core driver in MSI research for new scientific discoveries, medical diagnostics, and commercial innovation. However, the lack of means to share, apply, and reproduce analyses hinders the broad application, validation, and use of novel MSI analysis methods. To address this central challenge, we introduce the Berkeley Analysis and Storage Toolkit (BASTet), a novel framework for shareable and reproducible data analysis that supports standardized data and analysis interfaces, integrated data storage, data provenance, workflow management, and a broad set of integrated tools. Based on BASTet, we describe the extension of the OpenMSI mass spectrometry imaging science gateway to enable web-based sharing, reuse, analysis, and visualization of data analyses and derived data products. We demonstrate the application of BASTet and OpenMSI in practice to identify and compare characteristic substructures in the mouse brain based on their chemical composition measured via MSI.
On-road anomaly detection by multimodal sensor analysis and multimedia processing
NASA Astrophysics Data System (ADS)
Orhan, Fatih; Eren, P. E.
2014-03-01
The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework which enables the analysis of sensors in multimodal aspect. It also provides plugin-based analyzing interfaces to develop sensor and image processing based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. With the usage of this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard brake, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted in order to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
An innovative and shared methodology for event reconstruction using images in forensic science.
Milliet, Quentin; Jendly, Manon; Delémont, Olivier
2015-09-01
This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
#LancerHealth: Using Twitter and Instagram as a tool in a campus wide health promotion initiative
Santarossa, Sara; Woodruff, Sarah J.
2018-01-01
The present study aimed to explore the use of popular technology that people already have/use as a health promotion tool, in a campus-wide social media health promotion initiative entitled #LancerHealth. During a two-week period, the university community was asked to share photos on Twitter and Instagram answering "What does being healthy on campus look like to you?", while tagging the image with #LancerHealth. All publicly tagged media were collected using the Netlytic software and analysed. Text analysis (N=234 records, Twitter; N=141 records, Instagram) revealed that the majority of the conversation was positive and focused on health and the university. Social network analysis, based on five network properties, showed a small network with little interaction. Lastly, photo coding analysis (N=71 unique images) indicated that the majority of the shared images were of physical activity (52%) and on campus (80%). Further research into this area is warranted. Significance for public health: As digital media continues to become a popular tool among both public health organizations and those in academia, it is important to understand how, why, and which platforms individuals are using with regard to their health. This campus-wide social media health promotion initiative found that people will use popular social networking sites like Twitter and Instagram to share their healthy behaviours. Online social networks, created through social networking sites, can play a role in the social diffusion of public health information and health behaviours. In this study, however, social network analysis revealed that influential and highly connected individuals need to be sharing information to generate social diffusion. This study can help guide future public health research in the area of social media and its potential influence on health promotion. PMID:29780763
"Drinking Deeply with Delight": An Investigation of Transformative Images in Isaiah 1 and 65-66
ERIC Educational Resources Information Center
Radford, Peter
2016-01-01
This project examines the images used in the beginning and ending chapters of Isaiah. The purpose of this project is to trace the transformation of specific images from their introduction in Isaiah 1 to their re-interpretation in Isaiah 65-66. While this analysis uses the verbal parallels (shared vocabulary) as a starting point, the present…
Threshold multi-secret sharing scheme based on phase-shifting interferometry
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Shi, Zhengang
2017-03-01
A threshold multi-secret sharing scheme based on phase-shifting interferometry is proposed. The K secret images to be shared are first encoded using Fourier transformation. These encoded images are then shared into many shadow images based on the recording principle of phase-shifting interferometry. In the recovery stage, the secret images can be restored by combining any 2K+1 or more shadow images, while any 2K or fewer shadow images reveal no information about the secret images. As a result, a (2K+1, N) threshold multi-secret sharing scheme is implemented. Simulation results demonstrate the feasibility of the proposed method.
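The scheme above realizes the threshold property optically; the sketch below illustrates the same (t, n) threshold behavior generically with polynomial-based (Shamir-style) sharing over a prime field. This is not the paper's interferometric construction, and the field size and parameters are illustrative only.

```python
import random

P = 257  # prime field, large enough for one 8-bit value

def make_shares(secret, t, n):
    """Split `secret` (0 <= secret < P) into n shares; any t reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123, t=3, n=5)
print(reconstruct(shares[:3]))  # 123, from any three of the five shares
```

Fewer than t shares leave the secret information-theoretically undetermined, which is the property the optical scheme provides at the 2K/2K+1 boundary.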
ClearedLeavesDB: an online database of cleared plant leaf images
2014-01-01
Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform, nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. 
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985
ClearedLeavesDB: an online database of cleared plant leaf images.
Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S
2014-03-28
Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform, nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. 
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
Rahman, Mahabubur; Watabe, Hiroshi
2018-05-01
Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. 
MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform to accelerate advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Neuroimaging Data Sharing on the Neuroinformatics Database Platform
Book, Gregory A; Stevens, Michael; Assaf, Michal; Glahn, David; Pearlson, Godfrey D
2015-01-01
We describe the Neuroinformatics Database (NiDB), an open-source database platform for archiving, analysis, and sharing of neuroimaging data. Data from the multi-site projects Autism Brain Imaging Data Exchange (ABIDE), Bipolar-Schizophrenia Network on Intermediate Phenotypes parts one and two (B-SNIP1, B-SNIP2), and Monetary Incentive Delay task (MID) are available for download from the public instance of NiDB, with more projects sharing data as it becomes available. As demonstrated by making several large datasets available, NiDB is an extensible platform appropriately suited to archive and distribute shared neuroimaging data. PMID:25888923
XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital.
Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Choi, Young Hwan; Cho, Yong Kyun
2013-12-01
The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE.
XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital
Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Cho, Yong Kyun
2013-01-01
Objectives The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Methods Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. Results The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Conclusions Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE. PMID:24523994
Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.
2013-01-01
We present a modular, high performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing, and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB solves these problems by 1) minimizing PHI use, providing a cost effective simple locally stored platform, 2) storing and associating all data (including genome) with a subject and creating a peer-to-peer sharing model, and 3) defining a sample, normalized definition of a data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507
The AstroVR Collaboratory, an On-line Multi-User Environment for Research in Astrophysics
NASA Astrophysics Data System (ADS)
van Buren, D.; Curtis, P.; Nichols, D. A.; Brundage, M.
We describe our experiment with an on-line collaborative environment where users share the execution of programs and communicate via audio, video, and typed text. Collaborative environments represent the next step in computer-mediated conferencing, combining powerful compute engines, data persistence, shared applications, and teleconferencing tools. As proof of concept, we have implemented a shared image analysis tool, allowing geographically distant users to analyze FITS images together. We anticipate that \\htmllink{AstroVR}{http://astrovr.ipac.caltech.edu:8888} and similar systems will become an important part of collaborative work in the next decade, with applications in remote observing, spacecraft operations, on-line meetings, and day-to-day research activities. The technology is generic and promises to find uses in business, medicine, government, and education.
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, exhibits characteristics that distinguish it from other objects in the image. Characteristics used for object recognition include color, shape, pattern, texture and spatial information, all of which can represent objects in a digital image. Methods have recently been developed for image feature extraction that combine curve analysis (simple curves) with chain-code feature search. This study develops an algorithm for the analysis and recognition of curve type as the basis of object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches. A complex curve is defined as a curve that contains a point of intersection. Applied to several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
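The chain-code representation the study builds on can be sketched briefly: each step between consecutive boundary pixels is replaced by one of eight direction indices. This is a minimal illustration of Freeman chain coding, not the paper's branch-detection algorithm.

```python
# 8-connected Freeman chain code: direction index for each step between
# consecutive boundary pixels (0 = east, indices increase counter-clockwise,
# with the y axis pointing up).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode an ordered list of (x, y) boundary pixels as a chain code."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return code

# A small L-shaped curve: three steps east, then two steps north
curve = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
print(chain_code(curve))  # [0, 0, 0, 2, 2]
```

A simple curve yields one such code sequence; a complex curve, in the paper's sense, would contain an intersection point where several branch codes meet.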
Nonlinear secret image sharing scheme.
Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young
2014-01-01
Over the past decade, most secret image sharing schemes have been based on Shamir's technique, which relies on linear combination polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, they face security threats such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. To achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with an XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. To evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. The average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.
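The XOR-based LSB embedding the scheme adapts can be illustrated in a simplified form; the sketch below applies a plain XOR-keyed LSB embed/extract to a list of 8-bit pixel values and omits the paper's variable m and prime-range modifications, so it shows only the steganographic step, not the full scheme.

```python
def embed(pixels, bits, key_bits):
    """Set each cover pixel's LSB to (message bit XOR key bit)."""
    stego = list(pixels)
    for i, (b, k) in enumerate(zip(bits, key_bits)):
        stego[i] = (stego[i] & 0xFE) | (b ^ k)
    return stego

def extract(stego, key_bits, n):
    """Recover message bits by XOR-ing each LSB with the key bit."""
    return [(stego[i] & 1) ^ key_bits[i] for i in range(n)]

pixels = [120, 121, 200, 57]   # 8-bit cover pixel values
message = [1, 0, 1, 1]         # secret bits to hide
key = [0, 1, 1, 0]             # shared key stream
stego = embed(pixels, message, key)
print(extract(stego, key, 4))  # [1, 0, 1, 1]
```

Because only the LSB of each pixel changes, each cover value moves by at most 1, which is why such schemes preserve a high PSNR.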
Nonlinear Secret Image Sharing Scheme
Shin, Sang-Ho; Yoo, Kee-Young
2014-01-01
Over the past decade, most secret image sharing schemes have been based on Shamir's technique, which relies on linear combination polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, they face security threats such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. To achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with an XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. To evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. The average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2m⌉ bits per pixel (bpp), respectively. PMID:25140334
Big heart data: advancing health informatics through data sharing in cardiovascular imaging.
Suinesiaputra, Avan; Medrano-Gracia, Pau; Cowan, Brett R; Young, Alistair A
2015-07-01
The burden of heart disease is rapidly worsening due to the increasing prevalence of obesity and diabetes. Data sharing and open database resources for heart health informatics are important for advancing our understanding of cardiovascular function, disease progression and therapeutics. Data sharing enables valuable information, often obtained at considerable expense and effort, to be reused beyond the specific objectives of the original study. Many government funding agencies and journal publishers are requiring data reuse, and are providing mechanisms for data curation and archival. Tools and infrastructure are available to archive anonymous data from a wide range of studies, from descriptive epidemiological data to gigabytes of imaging data. Meta-analyses can be performed to combine raw data from disparate studies to obtain unique comparisons or to enhance statistical power. Open benchmark datasets are invaluable for validating data analysis algorithms and objectively comparing results. This review provides a rationale for increased data sharing and surveys recent progress in the cardiovascular domain. We also highlight the potential of recent large cardiovascular epidemiological studies enabling collaborative efforts to facilitate data sharing, algorithms benchmarking, disease modeling and statistical atlases.
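The inverse-variance pooling that underlies such meta-analyses of shared data can be sketched in a few lines. This is the fixed-effect version; a random-effects model would add a between-study variance term to each weight.

```python
def pooled_effect(effects, std_errors):
    """Fixed-effect inverse-variance meta-analysis.
    Returns the pooled effect estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return est, se

# Two hypothetical study-level effect sizes with equal precision
est, se = pooled_effect([-0.20, -0.10], [0.10, 0.10])
print(round(est, 3), round(se, 3))  # -0.15 0.071
```

With equal standard errors the pooled estimate is simply the mean of the study effects, and the pooled standard error shrinks as studies are added, which is the statistical-power benefit of data sharing the review describes.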
Patient-controlled sharing of medical imaging data across unaffiliated healthcare organizations
Ahn, David K; Unde, Bhagyashree; Gage, H Donald; Carr, J Jeffrey
2013-01-01
Background Current image sharing is carried out by manual transportation of CDs by patients or organization-coordinated sharing networks. The former places a significant burden on patients and providers. The latter faces challenges to patient privacy. Objective To allow healthcare providers efficient access to medical imaging data acquired at other unaffiliated healthcare facilities while ensuring strong protection of patient privacy and minimizing burden on patients, providers, and the information technology infrastructure. Methods An image sharing framework is described that involves patients as an integral part of, and with full control of, the image sharing process. Central to this framework is the Patient Controlled Access-key REgistry (PCARE) which manages the access keys issued by image source facilities. When digitally signed by patients, the access keys are used by any requesting facility to retrieve the associated imaging data from the source facility. A centralized patient portal, called a PCARE patient control portal, allows patients to manage all the access keys in PCARE. Results A prototype of the PCARE framework has been developed by extending open-source technology. The results for feasibility, performance, and user assessments are encouraging and demonstrate the benefits of patient-controlled image sharing. Discussion The PCARE framework is effective in many important clinical cases of image sharing and can be used to integrate organization-coordinated sharing networks. The same framework can also be used to realize a longitudinal virtual electronic health record. Conclusion The PCARE framework allows prior imaging data to be shared among unaffiliated healthcare facilities while protecting patient privacy with minimal burden on patients, providers, and infrastructure. A prototype has been implemented to demonstrate the feasibility and benefits of this approach. PMID:22886546
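The patient-signing step at the heart of the PCARE flow can be illustrated in miniature. The sketch below uses an HMAC as a stand-in for a true digital signature (which would use the patient's asymmetric key pair); the field names and flow are hypothetical illustrations, not the PCARE protocol.

```python
import hashlib
import hmac
import json

def sign_access_key(access_key: dict, patient_secret: bytes) -> str:
    """Patient countersigns an access key issued by the image source facility.
    HMAC stands in for a real public-key signature in this sketch."""
    payload = json.dumps(access_key, sort_keys=True).encode()
    return hmac.new(patient_secret, payload, hashlib.sha256).hexdigest()

def verify_access_key(access_key: dict, signature: str,
                      patient_secret: bytes) -> bool:
    """Requesting facility checks the signature before retrieving images."""
    expected = sign_access_key(access_key, patient_secret)
    return hmac.compare_digest(signature, expected)

# Hypothetical access key issued by the source facility
access_key = {"study": "CT-2013-0042", "source": "facility-A",
              "expires": "2013-12-31"}
secret = b"patient-held signing secret"
sig = sign_access_key(access_key, secret)
print(verify_access_key(access_key, sig, secret))                       # True
print(verify_access_key({**access_key, "study": "X"}, sig, secret))     # False
```

The point of the design is that a requesting facility can present the key and signature to the source facility without the registry itself ever holding the imaging data.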
Ontology-based, Tissue MicroArray oriented, image centered tissue bank
Viti, Federica; Merelli, Ivan; Caprera, Andrea; Lazzari, Barbara; Stella, Alessandra; Milanesi, Luciano
2008-01-01
Background Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results In this work we propose a Tissue MicroArray web oriented system to support researchers in managing bio-samples and, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and results evaluation. Indeed, our system provides ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working on well-defined terms it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a full compliant gene ontology definition, enabling statistical studies about correlation between the analyzed pathology and the most commonly related biological processes. PMID:18460177
Yu, Kai; Shi, Fei; Gao, Enting; Zhu, Weifang; Chen, Haoyu; Chen, Xinjian
2018-01-01
Optic nerve head (ONH) is a crucial region for glaucoma detection and tracking based on spectral domain optical coherence tomography (SD-OCT) images. In this region, the existence of a “hole” structure makes retinal layer segmentation and analysis very challenging. To improve retinal layer segmentation, we propose a 3D method for ONH centered SD-OCT image segmentation, which is based on a modified graph search algorithm with a shared-hole and locally adaptive constraints. With the proposed method, both the optic disc boundary and nine retinal surfaces can be accurately segmented in SD-OCT images. An overall mean unsigned border positioning error of 7.27 ± 5.40 µm was achieved for layer segmentation, and a mean Dice coefficient of 0.925 ± 0.03 was achieved for optic disc region detection. PMID:29541497
Image Reconstruction is a New Frontier of Machine Learning.
Wang, Ge; Ye, Jong Chu; Mueller, Klaus; Fessler, Jeffrey A
2018-06-01
Over the past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of "Machine learning for image reconstruction." This special issue is a sister issue of the special issue published in May 2016 of this journal with the theme "Deep learning in medical imaging" [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars for medical imaging. Together we cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then extracted diagnostic features/readings.
MilxXplore: a web-based system to explore large imaging datasets.
Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J
2013-01-01
As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject's level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparing the results against the rest of the population. MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis.
Metadata management for high content screening in OMERO
Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R.
2016-01-01
High content screening (HCS) experiments create a classic data management challenge—multiple, large sets of heterogeneous structured and unstructured data, that must be integrated and linked to produce a set of “final” results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368
Rostral and caudal prefrontal contribution to creativity: a meta-analysis of functional imaging data
Gonen-Yaacovi, Gil; de Souza, Leonardo Cruz; Levy, Richard; Urbanski, Marika; Josse, Goulven; Volle, Emmanuelle
2013-01-01
Creativity is of central importance for human civilization, yet its neurocognitive bases are poorly understood. The aim of the present study was to integrate existing functional imaging data by using the meta-analysis approach. We reviewed 34 functional imaging studies that reported activation foci during tasks assumed to engage creative thinking in healthy adults. A coordinate-based meta-analysis using Activation Likelihood Estimation (ALE) first showed a set of predominantly left-hemispheric regions shared by the various creativity tasks examined. These regions included the caudal lateral prefrontal cortex (PFC), the medial and lateral rostral PFC, and the inferior parietal and posterior temporal cortices. Further analyses showed that tasks involving the combination of remote information (combination tasks) activated more anterior areas of the lateral PFC than tasks involving the free generation of unusual responses (unusual generation tasks), although both types of tasks shared caudal prefrontal areas. In addition, verbal and non-verbal tasks involved the same regions in the left caudal prefrontal, temporal, and parietal areas, but also distinct domain-oriented areas. Taken together, these findings suggest that several frontal and parieto-temporal regions may support cognitive processes shared by diverse creativity tasks, and that some regions may be specialized for distinct types of processes. In particular, the lateral PFC appeared to be organized along a rostro-caudal axis, with rostral regions involved in combining ideas creatively and more posterior regions involved in freely generating novel ideas. PMID:23966927
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is getting more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool and a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on the survey of approaches for parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for Multi-Processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
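The shared-memory strategy the survey describes (split the image into strips, process strips concurrently, reconcile the borders) can be illustrated with a minimal, hypothetical sketch. Python's ThreadPoolExecutor stands in for the OpenMP/Pthreads worker pools discussed in the paper, and a simple 3x3 local-minimum filter stands in for the per-pixel flooding step of a real watershed; the strip-plus-halo decomposition is the point, not the operator.

```python
# Hypothetical sketch of strip-based domain decomposition for shared-memory
# image operators. NOT the surveyed watershed implementations themselves:
# a 3x3 local-minimum filter is used as a stand-in local operator.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def local_min_3x3(img):
    """3x3 erosion (local minimum) with edge replication at the borders."""
    p = np.pad(img, 1, mode="edge")
    out = img.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = np.minimum(out, p[1 + dy:1 + dy + img.shape[0],
                                    1 + dx:1 + dx + img.shape[1]])
    return out

def parallel_filter(img, n_workers=4):
    """Split rows into strips with 1-row halos, filter strips concurrently."""
    bounds = np.linspace(0, img.shape[0], n_workers + 1, dtype=int)

    def work(i):
        lo, hi = bounds[i], bounds[i + 1]
        a, b = max(lo - 1, 0), min(hi + 1, img.shape[0])  # halo rows
        strip = local_min_3x3(img[a:b])
        return strip[lo - a: strip.shape[0] - (b - hi)]   # drop halo output

    with ThreadPoolExecutor(n_workers) as ex:
        return np.vstack(list(ex.map(work, range(n_workers))))

img = np.arange(64, dtype=float).reshape(8, 8)
assert np.array_equal(parallel_filter(img), local_min_3x3(img))
```

Because each strip carries a one-row halo, the parallel result matches the sequential filter exactly; for a true watershed the border reconciliation step is considerably more involved, which is precisely the overhead the survey analyzes.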
Threshold secret sharing scheme based on phase-shifting interferometry.
Deng, Xiaopeng; Shi, Zhengang; Wen, Wei
2016-11-01
We propose a new method for secret image sharing with the (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied with an encryption key in advance, is first encrypted by using Fourier transformation. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information about the secret image. Thus, a (3,N) threshold secret sharing scheme can be implemented. Compared with our previously reported method, the algorithm of this paper is suited not only for binary images but also for gray-scale images. Moreover, the proposed algorithm can obtain a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.
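The threshold mechanism behind such schemes can be sketched with the standard phase-shifting relation (this is a generic illustration, not the authors' full algorithm, which additionally encrypts via a key and a Fourier transform). Each pixel's secret phase phi is hidden in N interferograms I_k = A + M*cos(phi + delta_k) with known, distinct shifts delta_k. Per pixel there are three unknowns (A, M*cos(phi), M*sin(phi)), so any three or more shares determine them by least squares, while two shares leave the system underdetermined:

```python
# Generic phase-shifting-interferometry threshold sketch (hypothetical
# simplification of the paper's scheme). Each share is one interferogram.
import numpy as np

def make_shares(phi, deltas, A=2.0, M=1.0):
    """Record one interferogram (share) per phase shift delta_k."""
    return [A + M * np.cos(phi + d) for d in deltas]

def recover_phase(shares, deltas):
    """Recover phi per pixel from any >= 3 shares with known shifts.

    I_k = A + M*cos(phi)*cos(delta_k) - M*sin(phi)*sin(delta_k),
    i.e. a linear system in (A, M*cos(phi), M*sin(phi)).
    """
    D = np.stack([np.ones(len(deltas)), np.cos(deltas), -np.sin(deltas)], axis=1)
    I = np.stack([s.ravel() for s in shares])        # shape (K, n_pixels)
    coef, *_ = np.linalg.lstsq(D, I, rcond=None)     # rows: A, M cos, M sin
    return np.arctan2(coef[2], coef[1]).reshape(shares[0].shape)

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, (4, 4))             # stand-in "encrypted" phase
deltas = 2 * np.pi * np.arange(5) / 5                # N = 5 shares
shares = make_shares(phi, deltas)
rec = recover_phase([shares[0], shares[2], shares[4]], deltas[[0, 2, 4]])
assert np.allclose(rec, phi, atol=1e-8)              # any 3 shares suffice
```

With only two interferograms the 3x2 system has infinitely many solutions, which is what prevents two shareholders from learning the secret phase.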
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Erickson, W. K.; Donovan, W. E.
1984-01-01
The Image Display and Analysis System (MIDAS), developed at NASA/Ames for the analysis of Landsat MSS images, is described. The MIDAS computer power and memory, graphics, resource-sharing, expansion and upgrade, environment and maintenance, and software/user-interface requirements are outlined; the implementation hardware (including 32-bit microprocessor, 512K error-correcting RAM, 70 or 140-Mbyte formatted disk drive, 512 x 512 x 24 color frame buffer, and local-area-network transceiver) and applications software (ELAS, CIE, and P-EDITOR) are characterized; and implementation problems, performance data, and costs are examined. Planned improvements in MIDAS hardware and design goals and areas of exploration for MIDAS software are discussed.
Fischer, Curt R.; Ruebel, Oliver; Bowen, Benjamin P.
2015-09-11
Mass spectrometry imaging (MSI) is used in an increasing number of biological applications. Typical MSI datasets contain unique, high-resolution mass spectra from tens of thousands of spatial locations, resulting in raw data sizes of tens of gigabytes per sample. In this paper, we review technical progress that is enabling new biological applications and that is driving an increase in the complexity and size of MSI data. Handling such data often requires specialized computational infrastructure, software, and expertise. OpenMSI, our recently described platform, makes it easy to explore and share MSI datasets via the web – even when larger than 50 GB. Here we describe the integration of OpenMSI with IPython notebooks for transparent, sharable, and replicable MSI research. An advantage of this approach is that users do not have to share raw data along with analyses; instead, data is retrieved via OpenMSI's web API. The IPython notebook interface provides a low-barrier entry point for data manipulation that is accessible for scientists without extensive computational training. Via these notebooks, analyses can be easily shared without requiring any data movement. We provide example notebooks for several common MSI analysis types including data normalization, plotting, clustering, and classification, and image registration.
Towards Social Radiology as an Information Infrastructure: Reconciling the Local With the Global
2014-01-01
The current widespread use of medical images and imaging procedures in clinical practice and patient diagnosis has brought about an increase in the demand for sharing medical imaging studies among health professionals in an easy and effective manner. This article reveals the existence of a polarization between the local and global demands for radiology practice. While there are no major barriers to sharing such studies when access is made from a (local) picture archive and communication system (PACS) within the domain of a healthcare organization, there are a number of impediments to sharing studies among health professionals on a global scale. Social radiology as an information infrastructure involves the notion of a shared infrastructure as a public good, affording a social space where people, organizations and technical components may spontaneously form associations in order to share clinical information linked to patient care and radiology practice. This article shows, however, that such polarization establishes a tension between local and global demands, which hinders the emergence of social radiology as an information infrastructure. Based on an analysis of the social space for radiology practice, the present article observes that this tension persists due to the inertia of a locally installed base in radiology departments, which common teleradiology models are not truly capable of reorganizing into a global social space for radiology practice. Reconciling the local with the global signifies integrating PACS and teleradiology into an evolving, secure, heterogeneous, shared, open information infrastructure where the conceptual boundaries between (local) PACS and (global) teleradiology are transparent, signaling the emergence of social radiology as an information infrastructure. PMID:25600710
Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Bak, Peter
2015-01-01
The IHE XDS-I profile proposes an architecture model for cross-enterprise medical image sharing, but there are only a few clinical implementations reported. Here, we investigate three pilot studies based on the IHE XDS-I profile to see whether we can use this architecture as a foundation for image sharing solutions in a variety of health-care settings. The first pilot study was image sharing for cross-enterprise health care with federated integration, which was implemented in Huadong Hospital and Shanghai Sixth People's Hospital within the Shanghai Shen-Kang Hospital Management Center; the second pilot study was an XDS-I-based patient-controlled image sharing solution, which was implemented by the Radiological Society of North America (RSNA) team in the USA; and the third pilot study was collaborative imaging diagnosis with electronic health-care record integration in regional health care, which was implemented in two districts in Shanghai. In order to support these pilot studies, we designed and developed new image access methods, components, and data models such as RAD-69/WADO hybrid image retrieval, the RSNA clearinghouse, and extensions of the metadata definitions in both the submission set and the cross-enterprise document sharing (XDS) registry. We identified several key issues that impact the implementation of XDS-I in practical applications, and conclude that the IHE XDS-I profile is a theoretically good architecture and a useful foundation for medical image sharing solutions across multiple regional health-care providers. PMID:26835497
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Esteva, M.; Hanlon, M.; Nanda, G.; Agarwal, P.
2015-12-01
Recent advances in imaging have provided a wealth of 3D datasets that reveal pore space microstructure (nm to cm length scale) and allow investigation of nonlinear flow and mechanical phenomena from first principles using numerical approaches. This framework has popularly been called "digital rock physics". Researchers, however, have trouble storing and sharing the datasets both due to their size and the lack of standardized image types and associated metadata for volumetric datasets. This impedes scientific cross-validation of the numerical approaches that characterize large scale porous media properties, as well as development of multiscale approaches required for correct upscaling. A single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal, that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geosciences or engineering researchers not necessarily trained in computer science or data analysis. Once widely accepted, the repository will jumpstart productivity and enable scientific inquiry and engineering decisions founded on a data-driven basis. This is the first repository of its kind. We show initial results on incorporating essential software tools and pipelines that make it easier for researchers to store and reuse data, and for educators to quickly visualize and illustrate concepts to a wide audience. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin.
Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative.
Gutman, David A; Cobb, Jake; Somanna, Dhananjaya; Park, Yuna; Wang, Fusheng; Kurc, Tahsin; Saltz, Joel H; Brat, Daniel J; Cooper, Lee A D
2013-01-01
Background: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. Objective: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. Materials and methods: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. Results: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20 000 whole-slide images from 22 cancer types. Discussion: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. Conclusions: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints. PMID:23893318
System design and implementation of digital-image processing using computational grids
NASA Astrophysics Data System (ADS)
Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping
2005-06-01
As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved, and the difficulties of data processing and transfer, an important issue for current computer and geo-science experts is developing internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to solve imbalances of network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology: namely, spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids, its implementation is then discussed in detail with respect to the technology of middleware. The whole network-based intelligent image-processing system is evaluated on the basis of the experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of the application of computational grids to digital-image processing.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs by taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable to speed up image processing by an automatic parallelization of image analysis tasks.
MilxXplore: a web-based system to explore large imaging datasets
Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J
2013-01-01
Objective: As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. Materials and methods: MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Discussion: Compared to existing software solutions that often provide an overview of the results at the subject's level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparing the results against the rest of the population. Conclusions: MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis. PMID:23775173
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on the blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
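The baseline that halftone visual cryptography builds on can be sketched with the classic (2, 2) construction (a minimal illustration, not the paper's blue-noise method): each secret pixel expands to a 2x2 block on two shares, and OR-stacking the transparencies leaves white blocks half-black and black blocks fully black, so the secret appears by contrast alone.

```python
# Classic (2, 2) visual cryptography sketch with pixel expansion 4.
# Illustrative baseline only, NOT the halftone/blue-noise scheme of the paper.
import numpy as np

PATTERNS = [np.array([[1, 0], [0, 1]]),   # 1 = black subpixel
            np.array([[0, 1], [1, 0]])]

def share_secret(secret, rng=None):
    """secret: 2-D 0/1 array (1 = black). Returns two random-looking shares."""
    rng = rng or np.random.default_rng()
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for y in range(h):
        for x in range(w):
            p = PATTERNS[rng.integers(2)]          # random pattern per pixel
            s1[2*y:2*y+2, 2*x:2*x+2] = p
            # same pattern for a white pixel, complementary for a black one
            s2[2*y:2*y+2, 2*x:2*x+2] = (1 - p) if secret[y, x] else p
    return s1, s2

def stack(s1, s2):
    """Superimposing transparencies = pixel-wise OR of black subpixels."""
    return s1 | s2

secret = np.array([[1, 0], [0, 1]])
a, b = share_secret(secret)
blocks = stack(a, b).reshape(2, 2, 2, 2).sum(axis=(1, 3))
assert np.array_equal(blocks == 4, secret == 1)   # black blocks fully dark
```

Each share in isolation shows exactly two black subpixels per block regardless of the secret bit, which is why a single transparency leaks nothing; the random, meaningless look of such shares is exactly what the halftone approach in the paper improves on.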
Gurcan, Metin N; Tomaszewski, John; Overton, James A; Doyle, Scott; Ruttenberg, Alan; Smith, Barry
2017-02-01
Interoperability across data sets is a key challenge for quantitative histopathological imaging. There is a need for an ontology that can support effective merging of pathological image data with associated clinical and demographic data. To foster organized, cross-disciplinary, information-driven collaborations in the pathological imaging field, we propose to develop an ontology to represent imaging data and methods used in pathological imaging and analysis, and call it Quantitative Histopathological Imaging Ontology - QHIO. We apply QHIO to breast cancer hot-spot detection with the goal of enhancing reliability of detection by promoting the sharing of data between image analysts. Copyright © 2016 Elsevier Inc. All rights reserved.
Bones, body parts, and sex appeal: An analysis of #thinspiration images on popular social media.
Ghaznavi, Jannath; Taylor, Laramie D
2015-06-01
The present study extends research on thinspiration images, visual and/or textual images intended to inspire weight loss, from pro-eating disorder websites to popular photo-sharing social media websites. The article reports on a systematic content analysis of thinspiration images (N=300) on Twitter and Pinterest. Images tended to be sexually suggestive and objectifying with a focus on ultra-thin, bony, scantily-clad women. Results indicated that particular social media channels and labels (i.e., tags) were characterized by more segmented, bony content and greater social endorsement compared to others. In light of theories of media influence, results offer insight into the potentially harmful effects of exposure to sexually suggestive and objectifying content in large online communities on body image, quality of life, and mental health. Copyright © 2015 Elsevier Ltd. All rights reserved.
CIAN - Cell Imaging and Analysis Network at the Biology Department of McGill University
Lacoste, J.; Lesage, G.; Bunnell, S.; Han, H.; Küster-Schöck, E.
2010-01-01
CF-31 The Cell Imaging and Analysis Network (CIAN) provides services and tools to researchers in the field of cell biology from within or outside Montreal's McGill University community. CIAN is composed of six scientific platforms: Cell Imaging (confocal and fluorescence microscopy), Proteomics (2-D protein gel electrophoresis and DiGE, fluorescent protein analysis), Automation and High throughput screening (Pinning robot and liquid handler), Protein Expression for Antibody Production, Genomics (real-time PCR), and Data storage and analysis (cluster, server, and workstations). Users submit project proposals, and can obtain training and consultation in any aspect of the facility, or initiate projects with the full-service platforms. CIAN is designed to facilitate training, enhance interactions, as well as share and maintain resources and expertise.
O'Connor, Brian D.; Yuen, Denis; Chung, Vincent; Duncan, Andrew G.; Liu, Xiang Kun; Patricia, Janice; Paten, Benedict; Stein, Lincoln; Ferretti, Vincent
2017-01-01
As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore ( https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH). PMID:28344774
Thompkins, Andie M.; Deshpande, Gopikrishna; Waggoner, Paul; Katz, Jeffrey S.
2017-01-01
Neuroimaging of the domestic dog is a rapidly expanding research topic in terms of the cognitive domains being investigated. Because dogs have shared both a physical and social world with humans for thousands of years, they provide a unique and socially relevant means of investigating a variety of shared human and canine psychological phenomena. Additionally, their trainability allows for neuroimaging to be carried out noninvasively in an awake and unrestrained state. In this review, a brief overview of functional magnetic resonance imaging (fMRI) is followed by an analysis of recent research with dogs using fMRI. Methodological and conceptual concerns found across multiple studies are raised, and solutions to these issues are suggested. With the research capabilities brought by canine functional imaging, findings may improve our understanding of canine cognitive processes, identify neural correlates of behavioral traits, and provide early-life selection measures for dogs in working roles. PMID:29456781
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and reduced acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
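To make the measurement model concrete, the following is a minimal sketch of single-pixel imaging with an orthogonal (Hadamard-type) pattern basis: the bucket detector records one inner product per displayed pattern, and the image is recovered as a weighted sum of the patterns. The 4-pixel "scene" and the use of a full Hadamard basis are illustrative assumptions, not the paper's frame-theoretic construction.

```python
# Single-pixel imaging sketch: display orthogonal +/-1 patterns, record one
# scalar (total light = inner product) per pattern, then invert. Because the
# Hadamard rows are orthogonal, inversion is just a weighted pattern sum.

def hadamard_patterns():
    # Rows of a 4x4 Hadamard matrix: 4 orthogonal patterns for a 4-pixel scene
    return [
        [1,  1,  1,  1],
        [1, -1,  1, -1],
        [1,  1, -1, -1],
        [1, -1, -1,  1],
    ]

def measure(scene, pattern):
    # Single-pixel detector output: inner product of scene and pattern
    return sum(s * p for s, p in zip(scene, pattern))

def reconstruct(measurements, patterns):
    n = len(patterns[0])
    # For an orthogonal +/-1 basis, x_j = (1/n) * sum_i y_i * p_i[j]
    return [sum(y * p[j] for y, p in zip(measurements, patterns)) / n
            for j in range(n)]

scene = [0.2, 0.8, 0.5, 0.1]           # hypothetical 4-pixel scene
patterns = hadamard_patterns()
ys = [measure(scene, p) for p in patterns]
recovered = reconstruct(ys, patterns)
```

Frame-based methods generalize exactly this inversion step to non-orthogonal, redundant pattern sets.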
The sharing of radiological images by professional mixed martial arts fighters on social media.
Rahmani, George; Joyce, Cormac W; McCarthy, Peter
2017-06-01
Mixed martial arts is a sport that has recently enjoyed a significant increase in popularity. This rise in popularity has catapulted many of these "cage fighters" into stardom and many regularly use social media to reach out to their fans. An interesting result of this interaction on social media is that athletes are sharing images of their radiological examinations when they sustain an injury. To review instances where mixed martial arts fighters shared images of their radiological examinations on social media and in what context they were shared. An Internet search was performed using the Google search engine. Search terms included "MMA," "mixed martial arts," "injury," "scan," "X-ray," "fracture," and "break." Articles which discussed injuries to MMA fighters were examined and those in which the fighters themselves shared a radiological image of their injury on social media were identified. During our search, we identified 20 MMA fighters who had shared radiological images of their injuries on social media. There were 15 different types of injury, with a fracture of the mid-shaft of the ulna being the most common. The most popular social media platform was Twitter. The most common imaging modality was X-ray (71%). The majority of injuries were sustained during competition (81%) and 35% of these fights resulted in a win for the fighter. Professional mixed martial artists are sharing radiological images of their injuries on social media. This may be in an attempt to connect with fans and raise their profile among other fighters.
An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches
NASA Astrophysics Data System (ADS)
Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur
2018-03-01
Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also arise from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, which stem from cache lines residing in the cache longer than required. In image processing analysis of, for example, extrapulmonary tuberculosis (TB), an accurate diagnosis of the tissue specimen is required. Therefore, a fast and reliable shared memory management system is needed to execute algorithms for processing vast amounts of specimen images. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near distance promotion, and the concept of ownership in the eviction policy to effectively improve cache thrashing and to avoid resource stealing among the processors.
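The named ingredients of the policy (middle insertion on a miss, promotion by two positions on a hit, eviction from the LRU end) can be sketched on a simple recency stack. This is an illustrative reading of the policy's name only; the exact insertion point, the ownership-aware eviction, and the partitioning are not reproduced here.

```python
# Sketch of a middle-insertion, promote-by-2 replacement policy in the
# spirit of MI2PP. Index 0 is the MRU end; eviction takes the last (LRU)
# entry. Details beyond the policy's name are illustrative assumptions.

class MI2PPCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []  # recency stack: index 0 = MRU, index -1 = LRU

    def access(self, line):
        if line in self.stack:
            # Hit: promote the line two positions toward the MRU end
            i = self.stack.index(line)
            self.stack.pop(i)
            self.stack.insert(max(0, i - 2), line)
            return True
        # Miss: evict from the LRU end if full, then insert at the middle,
        # so a streaming line cannot immediately displace hot MRU entries
        if len(self.stack) >= self.capacity:
            self.stack.pop()
        self.stack.insert(len(self.stack) // 2, line)
        return False

cache = MI2PPCache(4)
for addr in ["a", "b", "c", "d"]:
    cache.access(addr)   # four cold misses fill the cache
hit = cache.access("a")  # re-reference: a hit, promoted two positions
```

Compared with plain LRU, lines must earn their way to the MRU end through repeated hits, which limits how long a single-use line occupies the cache.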
Huang, Shuo; Liu, Jing
2010-05-01
Application of clinical digital medical imaging has raised many tough issues to tackle, such as data storage, management, and information sharing. Here we investigated a mobile phone based medical image management system which is capable of achieving personal medical imaging information storage, management and comprehensive health information analysis. The technologies related to the management system, spanning wireless transmission, the technical capabilities of the phone in mobile health care, and the management of a mobile medical database, were discussed. Taking the transmission of medical infrared images between phone and computer as an example, the working principle of the present system was demonstrated.
Design and Implementation of CNEOST Image Database Based on NoSQL System
NASA Astrophysics Data System (ADS)
Wang, X.
2013-07-01
The China Near Earth Object Survey Telescope (CNEOST) is the largest Schmidt telescope in China, and it has acquired more than 3 TB of astronomical image data since it saw the first light in 2006. After the upgrade of the CCD camera in 2013, over 10 TB of data will be obtained every year. The management of massive images is not only an indispensable part of the data processing pipeline but also the basis of data sharing. Based on the analysis of requirements, an image management system is designed and implemented by employing a non-relational database.
Design and Implementation of CNEOST Image Database Based on NoSQL System
NASA Astrophysics Data System (ADS)
Wang, Xin
2014-04-01
The China Near Earth Object Survey Telescope is the largest Schmidt telescope in China, and it has acquired more than 3 TB of astronomical image data since it saw the first light in 2006. After the upgrade of the CCD camera in 2013, over 10 TB of data will be obtained every year. The management of the massive images is not only an indispensable part of the data processing pipeline but also the basis of data sharing. Based on the analysis of requirements, an image management system is designed and implemented by employing a non-relational database.
Optical threshold secret sharing scheme based on basic vector operations and coherence superposition
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen
2015-04-01
We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.
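For readers unfamiliar with (2,n) threshold schemes in general, the following digital sketch shows the classic Shamir construction over a prime field for a single pixel value: any two shares reconstruct the pixel, one share alone reveals nothing. This is the standard polynomial technique, deliberately swapped in for illustration; it is not the optical coherence-superposition method of the paper above.

```python
# Shamir (2,n) threshold sharing of one 8-bit pixel over GF(257):
# hide the pixel as the constant term of a random degree-1 polynomial,
# hand out point evaluations as shares, and interpolate at x = 0 from
# any two shares to recover it.
import random

P = 257  # smallest prime > 255, so any 8-bit pixel value fits

def make_shares(pixel, n):
    a = random.randrange(P)                 # random slope of the polynomial
    return [(x, (pixel + a * x) % P) for x in range(1, n + 1)]

def reconstruct(share1, share2):
    (x1, y1), (x2, y2) = share1, share2
    # Lagrange interpolation at x = 0 for a degree-1 polynomial
    inv = pow(x2 - x1, -1, P)               # modular inverse of (x2 - x1)
    a = ((y2 - y1) * inv) % P
    return (y1 - a * x1) % P

shares = make_shares(200, 5)                # pixel value 200, n = 5 shares
pixel = reconstruct(shares[0], shares[3])   # any pair suffices
```

The optical scheme replaces this algebra with physical vector operations and interference, but the threshold property being demonstrated is the same.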
Open source tools for fluorescent imaging.
Hamilton, Nicholas A
2012-01-01
As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical both to remove bottlenecks in throughput and to fully extract and exploit the information contained in the images. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.
ImTK: an open source multi-center information management toolkit
NASA Astrophysics Data System (ADS)
Alaoui, Adil; Ingeholm, Mary Lou; Padh, Shilpa; Dorobantu, Mihai; Desai, Mihir; Cleary, Kevin; Mun, Seong K.
2008-03-01
The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available tools related to the information management needs of basic, clinical, and translational research. An open source framework and agile programming methodology can enable distributed software development while an open architecture will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data sharing network that simulates a multi-center environment based on a federated data access model. This model includes the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia medical information such as clinical information, images, and bioinformatics data from multiple data sources. The envisioned ImTK data environment will include an open architecture and data model implementation that complies with existing standards such as Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), and the technical framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On one hand, they rarely considered the distinguishability for medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) scheme is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure the distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and stored ownership shares. 200 different medical images of 5 types are collected as the testing data and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When fixing the false positive rate to 1.00%, the average value of false negative rates by using DRZW is only 1.75% under 20 common attacks with different parameters.
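The master-share/ownership-share mechanism can be sketched with bitwise XOR standing in for the (2,2) visual-cryptography stacking step. The feature bits below stand in for the CLBP-derived content features, which are not reproduced here; this is an illustration of the zero-watermarking flow, not the DRZW construction itself.

```python
# Zero-watermarking sketch: nothing is embedded in the image. The stored
# ownership share is features XOR watermark; stacking (XOR) it with the
# features re-extracted from a queried image recovers the watermark.
# XOR is a stand-in for the paper's (2,2) visual cryptography.

def make_ownership_share(feature_bits, watermark_bits):
    # Ownership share = master share (content features) XOR watermark
    return [f ^ w for f, w in zip(feature_bits, watermark_bits)]

def recover_watermark(feature_bits, ownership_share):
    # Re-extracted master share XOR stored ownership share -> watermark
    return [f ^ o for f, o in zip(feature_bits, ownership_share)]

features  = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical CLBP-style feature bits
watermark = [0, 1, 1, 0, 1, 0, 0, 1]
share = make_ownership_share(features, watermark)
recovered = recover_watermark(features, share)
```

Because the image itself is never modified, the scheme is distortion-free; robustness then hinges entirely on the features surviving attacks, which is why DRZW invests in CLBP-based features.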
Foolad, Negar; Ornelas, Jennifer N; Clark, Ashley K; Ali, Ifrah; Sharon, Victoria R; Al Mubarak, Luluah; Lopez, Andrés; Alikhan, Ali; Al Dabagh, Bishr; Firooz, Alireza; Awasthi, Smita; Liu, Yu; Li, Chin-Shang; Sivamani, Raja K
2017-09-01
Cloud-based image sharing technology allows facilitated sharing of images. Cloud-based image sharing technology has not been well-studied for acne assessments or treatment preferences, among international evaluators. We evaluated inter-rater variability of acne grading and treatment recommendations among an international group of dermatologists that assessed photographs. This is a prospective, single visit photographic study to assess inter-rater agreement of acne photographs shared through an integrated mobile device, cloud-based, and HIPAA-compliant platform. Inter-rater agreements for global acne assessment and acne lesion counts were evaluated by the Kendall's coefficient of concordance while correlations between treatment recommendations and acne severity were calculated by Spearman's rank correlation coefficient. There was good agreement for the evaluation of inflammatory lesions (KCC = 0.62, P < 0.0001), noninflammatory lesions (KCC = 0.62, P < 0.0001), and the global acne grading system score (KCC = 0.69, P < 0.0001). Topical retinoid, oral antibiotic, and isotretinoin treatment preferences correlated with photographic based acne severity. Our study supports the use of mobile phone based photography and cloud-based image sharing for acne assessment. Cloud-based sharing may facilitate acne care and research among international collaborators. © 2017 The International Society of Dermatology.
Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F
2010-01-01
An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
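The simplest model in the predefined list such a kinetic-analysis tool offers is the one-tissue compartment model, dCt/dt = K1*Cp(t) - k2*Ct(t), where Cp is the plasma input and Ct the tissue concentration. The sketch below integrates it with Euler steps; the parameter values and input function are invented for illustration, and this is not COMKAT's actual API.

```python
# One-tissue compartment model: tissue uptake driven by plasma input Cp
# with influx rate K1 and efflux rate k2, integrated by forward Euler.
import math

def simulate_1tcm(K1, k2, cp, dt):
    # cp: plasma input sampled every dt minutes; returns tissue curve Ct
    ct, out = 0.0, []
    for c in cp:
        ct += dt * (K1 * c - k2 * ct)
        out.append(ct)
    return out

dt = 0.1                                   # minutes per sample
t = [i * dt for i in range(600)]           # a 60-minute scan
cp = [5.0 * x * math.exp(-x) for x in t]   # hypothetical plasma input
ct = simulate_1tcm(K1=0.3, k2=0.1, cp=cp, dt=dt)
```

Parameter estimation then amounts to adjusting K1 and k2 until the simulated Ct matches the time-activity curve measured in a region of interest, which is the fitting loop tools like COMKAT automate.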
High speed quantitative digital microscopy
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.
1984-01-01
Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.
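The pipeline arrangement described above, where each processor applies one step to each cell image and passes it on so all stages work concurrently on different items, can be sketched with chained generators. The per-cell processing steps below are toy stand-ins; the real system used twelve MC6809 processors with shared two-port memories between neighbors.

```python
# Pipeline-multiprocessor sketch: each stage transforms items from its
# upstream neighbor and yields them downstream, mirroring processors
# handing cell images along via shared memory.

def stage(process, upstream):
    for item in upstream:
        yield process(item)

def threshold(img):
    # Toy stand-in for one pipeline step: binarize the cell image
    return [1 if p > 10 else 0 for p in img]

def count_foreground(mask):
    # Toy stand-in for a measurement step further down the pipeline
    return sum(mask)

cell_images = [[5, 12, 30], [1, 2, 3], [11, 11, 0]]
pipeline = stage(count_foreground, stage(threshold, iter(cell_images)))
results = list(pipeline)   # one measurement per cell image
```

In the hardware version the stages run truly in parallel, so once the pipeline fills, one finished cell image emerges per stage-time rather than per whole-algorithm-time.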
A Metaphorical Insight into Educational Planning.
ERIC Educational Resources Information Center
Inbar, Dan E.
1991-01-01
Considers educational planning as a communication of shared symbols creating intent geared toward change. Elaborates 11 groups of metaphorical images of planning (as circle, recipe, compass, map, puzzle, tree, maze, Ariadne's thread, prediction, art, and spider's web) bridging impression and expression. Relates this metaphorical analysis to…
Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard
2018-06-01
Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and support scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the use of the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image analysis workflows for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks .
IHE cross-enterprise document sharing for imaging: design challenges
NASA Astrophysics Data System (ADS)
Noumeir, Rita
2006-03-01
Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Record (EHR). This profile proposes an architecture based on a central Registry that holds metadata information describing published Documents residing in one or multiple Document Repositories. As medical images constitute important information of the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I), is based on the XDS architecture. The XDS-I integration solution that is published as part of the IHE Technical Framework is the result of an extensive investigation effort of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.
Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
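The SUV normalization step mentioned in the Methods is, in its common body-weight form, SUV = tissue activity concentration / (injected dose / body weight). With concentration in Bq/mL, dose in Bq, and weight in grams (approximating 1 g of tissue as 1 mL), SUV is dimensionless. A toy sketch with invented numbers:

```python
# Body-weight SUV normalization: ratio of the measured activity
# concentration to the injected dose diluted uniformly over the body.

def suv_bw(concentration_bq_per_ml, injected_dose_bq, body_weight_g):
    return concentration_bq_per_ml / (injected_dose_bq / body_weight_g)

# 70 kg patient, 370 MBq injection, voxel reading of 10,000 Bq/mL;
# SUV near 1 means uptake close to the whole-body average.
suv = suv_bw(10_000.0, 370e6, 70_000.0)
```

In practice the dose must also be decay-corrected to the scan time, which is one of the bookkeeping details the DICOM Real World Value Mapping objects above are designed to carry explicitly.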
IHE profiles applied to regional PACS.
Fernandez-Bayó, Josep
2011-05-01
PACS has been widely adopted as an image storage solution that perfectly fits the radiology department workflow and that can be easily extended to other hospital departments. Integrations with other hospital systems, like the Radiology Information System, the Hospital Information System and the Electronic Patient Record, have been achieved but remain challenging aims. PACS also creates the perfect environment for teleradiology and teleworking setups. One step further is the regional PACS concept, where different hospitals or health care enterprises share the images in an integrated Electronic Patient Record. Among the different solutions available to share images between different hospitals, the IHE (Integrating the Healthcare Enterprise) organization presents the Cross Enterprise Document Sharing profile (XDS), which allows sharing images from different hospitals even if they have different PACS vendors. Adopting XDS has multiple advantages: images do not need to be duplicated in a central archive to be shared among the different healthcare enterprises, they only need to be indexed and published in a central document registry. In the XDS profile IHE defines the mechanisms to publish and index the images in the central document registry. It also defines the mechanisms that each hospital will use to retrieve those images regardless of the hospital PACS in which they are stored. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
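The register-then-retrieve flow described above can be sketched with two tiny classes: each hospital repository keeps its own documents, a central registry indexes only metadata plus a pointer, and a consumer queries the registry and then fetches from whichever repository holds each document. Class and field names here are illustrative, not the IHE transaction or metadata names.

```python
# XDS-style publish/query/retrieve sketch: documents stay in the source
# hospitals; only metadata and a repository pointer go to the registry.

class Repository:
    def __init__(self):
        self.docs = {}
    def store(self, doc_id, content):
        self.docs[doc_id] = content

class Registry:
    def __init__(self):
        self.index = []  # metadata entries pointing into repositories
    def register(self, patient_id, doc_id, repository):
        self.index.append({"patient": patient_id, "doc": doc_id, "repo": repository})
    def query(self, patient_id):
        return [e for e in self.index if e["patient"] == patient_id]

hospital_a, hospital_b = Repository(), Repository()
registry = Registry()

hospital_a.store("cr-001", "chest x-ray report")
registry.register("patient-42", "cr-001", hospital_a)
hospital_b.store("mr-007", "brain MRI report")
registry.register("patient-42", "mr-007", hospital_b)

# Consumer: one registry query, then retrieval from the owning repositories
results = [e["repo"].docs[e["doc"]] for e in registry.query("patient-42")]
```

This is why XDS avoids a duplicated central archive: the registry scales with metadata volume, while image bytes move only when a consumer actually retrieves them.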
The sharing of radiological images by professional mixed martial arts fighters on social media
Joyce, Cormac W; McCarthy, Peter
2017-01-01
Background Mixed martial arts is a sport that has recently enjoyed a significant increase in popularity. This rise in popularity has catapulted many of these “cage fighters” into stardom and many regularly use social media to reach out to their fans. An interesting result of this interaction on social media is that athletes are sharing images of their radiological examinations when they sustain an injury. Purpose To review instances where mixed martial arts fighters shared images of their radiological examinations on social media and in what context they were shared. Material and Methods An Internet search was performed using the Google search engine. Search terms included “MMA,” “mixed martial arts,” “injury,” “scan,” “X-ray,” “fracture,” and “break.” Articles which discussed injuries to MMA fighters were examined and those in which the fighters themselves shared a radiological image of their injury on social media were identified. Results During our search, we identified 20 MMA fighters who had shared radiological images of their injuries on social media. There were 15 different types of injury, with a fracture of the mid-shaft of the ulna being the most common. The most popular social media platform was Twitter. The most common imaging modality was X-ray (71%). The majority of injuries were sustained during competition (81%) and 35% of these fights resulted in a win for the fighter. Conclusion Professional mixed martial artists are sharing radiological images of their injuries on social media. This may be in an attempt to connect with fans and raise their profile among other fighters. PMID:28717518
Developing national on-line services to annotate and analyse underwater imagery in a research cloud
NASA Astrophysics Data System (ADS)
Proctor, R.; Langlois, T.; Friedman, A.; Davey, B.
2017-12-01
Fish image annotation data is currently collected by various research, management and academic institutions globally (over 100,000 hours of deployments) with varying degrees of standardisation and limited formal collaboration or data synthesis. We present a case study of how national on-line services, developed within a domain-oriented research cloud, have been used to annotate habitat images and synthesise fish annotation data sets collected using Autonomous Underwater Vehicles (AUVs) and baited remote underwater stereo-video (stereo-BRUV). Two software tools under development have been brought together in the marine science cloud to provide marine biologists with a powerful service for image annotation. SQUIDLE+ is an online platform designed for exploration, management and annotation of georeferenced image and video data. It provides a flexible annotation framework allowing users to work with their preferred annotation schemes. We have used SQUIDLE+ to sample the habitat composition and complexity of images of the benthos collected using stereo-BRUV. GlobalArchive is designed to be a centralised repository of aquatic ecological survey data, with design principles including ease of use, secure user access, flexible data import, and the collection of any sampling and image analysis information. To easily share and synthesise data we have implemented data sharing protocols, including Open Data and synthesis Collaborations, and a spatial map to explore global datasets and filter them to create a synthesis. These tools in the science cloud, together with a virtual desktop analysis suite offering Python and R environments, offer an unprecedented capability to deliver marine biodiversity information of value to marine managers and scientists alike.
Image, word, action: interpersonal dynamics in a photo-sharing community.
Suler, John
2008-10-01
In online photo-sharing communities, the individual's expression of self and the relationships that evolve among members are determined by the kinds of images that are shared, by the words exchanged among members, and by interpersonal actions that do not specifically rely on images or text. This article examines the dynamics of personal expression via images in Flickr, including a proposed system for identifying the dimensions of imagistic communication and a discussion of the psychological meanings embedded in a sequence of images. It explores how photographers use text descriptors to supplement their images and how different types of comments on photographs influence interpersonal relationships. The "fav"--when members choose an image as one of their favorites--is examined as one type of action that can serve a variety of interpersonal functions. Although images play a powerful role in the expression of self, it is the integration of images, words, and actions that maximizes the development of relationships.
Gollub, Randy L.; Shoemaker, Jody M.; King, Margaret D.; White, Tonya; Ehrlich, Stefan; Sponheim, Scott R.; Clark, Vincent P.; Turner, Jessica A.; Mueller, Bryon A.; Magnotta, Vince; O’Leary, Daniel; Ho, Beng C.; Brauns, Stefan; Manoach, Dara S.; Seidman, Larry; Bustillo, Juan R.; Lauriello, John; Bockholt, Jeremy; Lim, Kelvin O.; Rosen, Bruce R.; Schulz, S. Charles; Calhoun, Vince D.; Andreasen, Nancy C.
2013-01-01
Expertly collected, well-curated data sets consisting of comprehensive clinical characterization and raw structural, functional and diffusion-weighted DICOM images in schizophrenia patients and sex and age-matched controls are now accessible to the scientific community through an on-line data repository (coins.mrn.org). The Mental Illness and Neuroscience Discovery Institute, now the Mind Research Network (MRN, www.mrn.org), comprised of investigators at the University of New Mexico, the University of Minnesota, Massachusetts General Hospital, and the University of Iowa, conducted a cross-sectional study to identify quantitative neuroimaging biomarkers of schizophrenia. Data acquisition across multiple sites permitted the integration and cross-validation of clinical, cognitive, morphometric, and functional neuroimaging results gathered from unique samples of schizophrenia patients and controls using a common protocol across sites. Particular effort was made to recruit patients early in the course of their illness, at the onset of their symptoms. There is a relatively even sampling of illness duration in chronic patients. This data repository will be useful to 1) scientists who can study schizophrenia by further analysis of this cohort and/or by pooling with other data; 2) computer scientists and software algorithm developers for testing and validating novel registration, segmentation, and other analysis software; and 3) educators in the fields of neuroimaging, medical image analysis and medical imaging informatics who need exemplar data sets for courses and workshops. Sharing provides the opportunity for independent replication of already published results from this data set and novel exploration. This manuscript describes the inclusion/exclusion criteria, imaging parameters and other information that will assist those wishing to use this data repository.
Lang, Jun
2012-01-30
In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
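The three-pass idea requires an encryption operation that commutes, so the two parties' locks can be removed in either order without any prior key exchange. A minimal sketch using commutative modular exponentiation (toy parameters; the paper's scheme uses the MPFRFT spectrum as the commuting transform rather than this classical construction):

```python
# Sketch of Shamir's three-pass protocol via commutative modular
# exponentiation. Illustrative parameters only; real use needs a large
# safe prime and exponents coprime to p - 1.
p = 2**127 - 1                      # a Mersenne prime, the shared public modulus

def keypair(e, p):
    """Return (e, d) with e*d = 1 (mod p-1), so x**e and x**d undo each other."""
    return e, pow(e, -1, p - 1)

a, a_inv = keypair(65537, p)        # Alice's secret lock/unlock exponents
b, b_inv = keypair(101, p)          # Bob's secret lock/unlock exponents

m = 123456789                       # the "message" (e.g., an image block as an integer)

pass1 = pow(m, a, p)                # Alice -> Bob: m locked with Alice's key
pass2 = pow(pass1, b, p)            # Bob -> Alice: now locked with both keys
pass3 = pow(pass2, a_inv, p)        # Alice -> Bob: Alice's lock removed
recovered = pow(pass3, b_inv, p)    # Bob removes his own lock

assert recovered == m
```

Because exponentiation mod p commutes, Alice can remove her lock even though Bob's was applied on top of it; at no point does an unlocked message or a key cross the channel.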
An Accurate Framework for Arbitrary View Pedestrian Detection in Images
NASA Astrophysics Data System (ADS)
Fan, Y.; Wen, G.; Qiu, S.
2018-01-01
We consider the problem of detecting pedestrians in images collected from various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). First, the positive training samples are clustered into entities that represent similar viewpoints. Then principal component analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, samples that can be reconstructed by linear approximation from their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required for our method. Histograms of oriented gradients (HOG) features are used as the feature descriptors, and the sliding-window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of intrinsic information and the correlations among multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method outperforms state-of-the-art methods in terms of both effectiveness and efficiency.
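The core test, accepting a window when some viewpoint subspace reconstructs it with small error, can be sketched on toy data. The clustering, feature dimensions, and threshold below are illustrative stand-ins, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-viewpoint positive samples: two "viewpoints",
# each a cluster lying near a low-dimensional affine subspace of R^10.
view_a = rng.normal(0, 1, (50, 2)) @ rng.normal(0, 1, (2, 10)) + 5.0
view_b = rng.normal(0, 1, (50, 2)) @ rng.normal(0, 1, (2, 10)) - 5.0

def affine_subspace(X, dim):
    """PCA of one viewpoint cluster: mean plus top principal directions."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:dim]               # basis rows span the cluster's subspace

def recon_error(x, mean, basis):
    """Distance between x and its projection onto the affine subspace."""
    centered = x - mean
    proj = centered @ basis.T @ basis
    return np.linalg.norm(centered - proj)

subspaces = [affine_subspace(view_a, 2), affine_subspace(view_b, 2)]

def is_pedestrian(x, thresh=1.0):
    # A window counts as a detection if SOME viewpoint subspace
    # reconstructs it with small error; no negative samples needed.
    return min(recon_error(x, m, B) for m, B in subspaces) < thresh

positive = view_a[0]                    # lies on a learned subspace
negative = rng.normal(0, 1, 10) * 20.0  # random clutter, far from both
print(is_pedestrian(positive), is_pedestrian(negative))  # -> True False
```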
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C.G.; De Geronimo, G.; Kirkham, R.
2009-11-13
The fundamental parameter method for quantitative SXRF and PIXE analysis and imaging using the dynamic analysis method is extended to model the changing X-ray yields and detector sensitivity with angle across large detector arrays. The method is implemented in the GeoPIXE software and applied to cope with the large solid angle of the new Maia 384 detector array and its 96-detector prototype, developed by CSIRO and BNL for SXRF imaging applications at the Australian and NSLS synchrotrons. Peak-to-background is controlled by mitigating charge sharing between detectors through careful optimization of a patterned molybdenum absorber mask. A geological application demonstrates the capability of the method to produce high-definition elemental images up to ~100 Mpixels in size.
A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud
Seenivasagam, V.; Velumani, R.
2013-01-01
Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to the patient data, encoded into a Quick Response (QR) code, serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. The Hu invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with the Checkmark software and is found to be robust to both geometric and non-geometric attacks.
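The zero-watermarking logic, in which the image itself is never modified, can be sketched as an XOR of shares. The feature extractor below is a simple stand-in for the paper's CT-SVD and Hu-moment features:

```python
import numpy as np

rng = np.random.default_rng(7)

def master_share(image, n_bits=64):
    # Stand-in for the paper's invariant features (Hu moments of CT-SVD
    # coefficients): threshold 8x8-block means against the global mean
    # to get a binary feature vector.
    blocks = image.reshape(8, image.shape[0] // 8, 8, image.shape[1] // 8)
    feats = blocks.mean(axis=(1, 3)).ravel()[:n_bits]
    return (feats > image.mean()).astype(np.uint8)

image = rng.integers(0, 256, (64, 64)).astype(float)
watermark = rng.integers(0, 2, 64).astype(np.uint8)   # e.g. bits of a QR code

ms = master_share(image)
secret_share = ms ^ watermark   # registered with a trusted party;
                                # the medical image is left untouched

# Verification: recompute the master share from the (unmodified) image
# and XOR with the stored secret share to reveal the watermark.
recovered = master_share(image) ^ secret_share
assert np.array_equal(recovered, watermark)
```

Because the watermark lives only in the secret share, image fidelity is preserved; robustness then hinges on the features staying invariant under the attacks the paper tests.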
The Virginia Slims identity crisis: an inside look at tobacco industry marketing to women.
Toll, B A; Ling, P M
2005-06-01
Because no prior studies have comprehensively analysed previously secret tobacco industry documents describing the marketing of female brands, the Virginia Slims brand was studied to explore how Philip Morris and competitors develop and adapt promotional campaigns targeting women. Analysis of previously secret tobacco industry documents. The majority of the documents used were from Philip Morris. The key to Virginia Slims advertising was creating an aspirational image which women associated with the brand. Virginia Slims co-opted women's liberation slogans to build a modern female image from 1968 through to the 1980s, and its market share grew from 0.24% to 3.16% during that time period. Ironically, the feminist image that worked very well for the brand was also the reason for its subsequent problems. Philip Morris experienced unprecedented losses in market share in the early 1990s, with a decline in market share for four consecutive years from 3.16% to 2.26%; they attributed this decline both to the fact that the brand's feminist image no longer appealed to young women aged 18-24 years, and to increased competition from more contemporary and lower priced competitors. Throughout the 1990s, attempts to reacquire young women while retaining Virginia Slims' loyal (now older) smokers were made using a "King Size" line extension, new slogans, and loyalty-building promotions. Tobacco advertisers initially created distinct female brands with aspirational images; continued appeal to young women was critical for long-term growth. The need for established brands to evolve to maintain relevance to young women creates an opportunity for tobacco counter-marketing, which should undermine tobacco brand imagery and promote aspirational smoke-free lifestyle images. Young women aged 18-24 are extremely valuable to the tobacco industry and should be a focus for tobacco control programmes.
Dialoguing with Dreams in Existential Art Therapy
ERIC Educational Resources Information Center
Moon, Bruce L.
2007-01-01
This article presents a theoretical and methodological framework for interactive dialogue and analysis of dream images in existential art therapy. In this phenomenological-existential approach, the client and art therapist are regarded as equal partners with respect to sharing in the process of creation and discovery of meaning (Frankl, 1955,…
The Alchemy of Mathematical Experience: A Psychoanalysis of Student Writings.
ERIC Educational Resources Information Center
Early, Robert E.
1992-01-01
Shares a psychological look at student images of mathematical learning and problem solving through students' writings about mathematical experiences. The analysis is done from a Jungian psychoanalytic orientation with the goal of assisting students in developing a deeper perspective from which to view their mathematics experience. (MDH)
Display, Identity and the Everyday: Self-Presentation through Online Image Sharing
ERIC Educational Resources Information Center
Davies, Julia
2007-01-01
Drawing on a study of a photo-sharing website (Flickr.com), this paper explores ways in which everyday life is reconfigured through an online photo-sharing space, where traditional boundaries between the public and private spheres are being extended, challenged or eroded. The paper reflects on the presentation and subjects of the images; the…
MIDG-Emerging grid technologies for multi-site preclinical molecular imaging research communities.
Lee, Jasper; Documet, Jorge; Liu, Brent; Park, Ryan; Tank, Archana; Huang, H K
2011-03-01
Molecular imaging is the visualization and identification of specific molecules in anatomy for insight into metabolic pathways, tissue consistency, and tracing of solute transport mechanisms. This paper presents the Molecular Imaging Data Grid (MIDG), which utilizes emerging grid technologies in preclinical molecular imaging to facilitate data sharing and discovery between preclinical molecular imaging facilities and their collaborating investigator institutions to expedite translational sciences research. Grid-enabled archiving, management, and distribution of animal-model imaging datasets help preclinical investigators to monitor, access and share their imaging data remotely, and promote preclinical imaging facilities to share published imaging datasets as resources for new investigators. The system architecture of the Molecular Imaging Data Grid is described in a four-layer diagram. A data model for preclinical molecular imaging datasets is also presented, based on imaging modalities currently used in a molecular imaging center. The MIDG system components and connectivity are presented. Finally, the workflow steps for grid-based archiving, management, and retrieval of preclinical molecular imaging data are described. Initial performance tests of the Molecular Imaging Data Grid system have been conducted at the USC IPILab using dedicated VMware servers. System connectivity, evaluated datasets, and preliminary results are presented. The results show the system's feasibility, limitations, and directions for future research. Translational and interdisciplinary research in medicine is increasingly interested in cellular and molecular biology activity at the preclinical level, utilizing molecular imaging methods on animal models. The task of integrated archiving, management, and distribution of these preclinical molecular imaging datasets at preclinical molecular imaging facilities is challenging due to disparate imaging systems and multiple off-site investigators.
A Molecular Imaging Data Grid design, implementation, and initial evaluation is presented to demonstrate the secure and novel data grid solution for sharing preclinical molecular imaging data across the wide-area-network (WAN).
Jonas, Monique; Malpas, Phillipa; Kersey, Kate; Merry, Alan; Bagg, Warwick
2017-01-27
To develop a policy governing the taking and sharing of photographic and radiological images by medical students. The Rules of the Health Information Privacy Code 1994 and the Code of Health and Disability Services Consumers' Rights were applied to the taking, storing and sharing of photographic and radiological images by medical students. Stakeholders, including clinicians, medical students, lawyers at district health boards in the Auckland region, the Office of the Privacy Commissioner and the Health and Disability Commissioner were consulted and their recommendations incorporated. The policy 'Taking and Sharing Images of Patients' sets expectations of students in relation to: photographs taken for the purpose of providing care; photographs taken for educational or professional practice purposes and photographic or radiological images used for educational or professional practice purposes. In addition, it prohibits students from uploading images of patients onto image-sharing apps such as Figure 1. The policy has since been extended to apply to all students at the Faculty of Medical and Health Sciences at the University of Auckland. Technology-driven evolutions in practice necessitate regular review to ensure compliance with existing legal regulations and ethical frameworks. This policy offers a starting point for healthcare providers to review their own policies and practice, with a view to ensuring that patients' trust in the treatment that their health information receives is upheld.
Shi, Bitao; Bourne, Jennifer; Harris, Kristen M
2011-03-01
Serial section electron microscopy (ssEM) is rapidly expanding as a primary tool to investigate synaptic circuitry and plasticity. The ultrastructural images collected through ssEM are content rich and their comprehensive analysis is beyond the capacity of an individual laboratory. Hence, sharing ultrastructural data is becoming crucial to visualize, analyze, and discover the structural basis of synaptic circuitry and function in the brain. We devised a web-based management system called SynapticDB (http://synapses.clm.utexas.edu/synapticdb/) that catalogues, extracts, analyzes, and shares experimental data from ssEM. The management strategy involves a library with check-in, checkout and experimental tracking mechanisms. We developed a series of spreadsheet templates (MS Excel, Open Office spreadsheet, etc) that guide users in methods of data collection, structural identification, and quantitative analysis through ssEM. SynapticDB provides flexible access to complete templates, or to individual columns with instructional headers that can be selected to create user-defined templates. New templates can also be generated and uploaded. Research progress is tracked via experimental note management and dynamic PDF forms that allow new investigators to follow standard protocols and experienced researchers to expand the range of data collected and shared. The combined use of templates and tracking notes ensures that the supporting experimental information is populated into the database and associated with the appropriate ssEM images and analyses. We anticipate that SynapticDB will serve future meta-analyses towards new discoveries about the composition and circuitry of neurons and glia, and new understanding about structural plasticity during development, behavior, learning, memory, and neuropathology.
Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.
Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie
2017-01-01
A growing number of tools now allow live recordings of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory to interpret quantitative imaging. To fulfill this necessity, we have developed an open-source toolset for Fiji, BRET-Analyzer, allowing a systematic analysis from image processing to ratio quantification. We share this open-source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset proposes (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method applied to the image used as the denominator of the ratio to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the mean ratio intensity and its standard deviation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared to standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, resolving signals reliably from both small and large ensembles over time. For example, we followed and quantified, live, the interaction dynamics of scaffold proteins in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile and efficient toolset for automated, reproducible and meaningful image ratio analysis.
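The core of such a pipeline, background subtraction, denominator thresholding, and pixel-by-pixel division, can be sketched as follows. This is a minimal stand-in for the Fiji toolset, with image alignment over time omitted:

```python
import numpy as np

def ratio_image(numerator, denominator, bg_num, bg_den, thresh):
    """Minimal sketch of a ratiometric pipeline: subtract backgrounds,
    threshold the denominator to delimit the specimen, then divide
    pixel by pixel and summarize the masked ratio."""
    num = numerator.astype(float) - bg_num
    den = denominator.astype(float) - bg_den
    mask = den > thresh                       # specimen limits from denominator
    ratio = np.full(num.shape, np.nan)        # NaN outside the specimen
    ratio[mask] = num[mask] / den[mask]
    return ratio, np.nanmean(ratio), np.nanstd(ratio)

# Toy frames: acceptor (numerator) and donor (denominator) channels with a
# bright 2x2 specimen and a uniform numerator background of 10.
den = np.zeros((4, 4)); den[1:3, 1:3] = 100.0
num = 0.5 * den + 10.0                        # true ratio 0.5 after subtraction
ratio, mean, sd = ratio_image(num, den, bg_num=10.0, bg_den=0.0, thresh=50.0)
print(round(mean, 3))                         # -> 0.5 over the masked pixels
```

Thresholding on the denominator (rather than the ratio) is what keeps dim, noisy border pixels from producing wild ratio values at the specimen's edge.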
Cheng, Chihwen; Stokes, Todd H.; Hang, Sovandy; Wang, May D.
2016-01-01
Doctors need fast and convenient access to medical data. This motivates the use of mobile devices for knowledge retrieval and sharing. We have developed TissueWikiMobile on the Apple iPhone and iPad to seamlessly access TissueWiki, an enormous repository of medical histology images. TissueWiki is a three-terabyte database of antibody information and histology images from the Human Protein Atlas (HPA). Using TissueWikiMobile, users are capable of extracting knowledge from protein expression, adding annotations to highlight regions of interest on images, and sharing their professional insight. By providing an intuitive human-computer interface, users can efficiently operate TissueWikiMobile to access important biomedical data without losing mobility. TissueWikiMobile furnishes the health community a ubiquitous way to collaborate and share expert opinions not only on the performance of various antibody stains but also on histology image annotation.
Huang, Mingbo; Hu, Ding; Yu, Donglan; Zheng, Zhensheng; Wang, Kuijian
2011-12-01
Enhanced extracorporeal counterpulsation (EECP) information consists of both text and hemodynamic waveform data. At present, EECP text information has been successfully managed through the Web browser, while the management and sharing of hemodynamic waveform data over the Internet has not yet been solved. In order to manage EECP information completely, and based on an in-depth analysis of the EECP hemodynamic waveform file in the digital imaging and communications in medicine (DICOM) format and its disadvantages for Internet sharing, we proposed the use of Extensible Markup Language (XML), currently the popular data exchange standard on the Internet, as the storage specification for the sharing of EECP waveform data. We then designed a Web-based sharing system for EECP hemodynamic waveform data on the ASP.NET 2.0 platform. We also introduce the four main system function modules and their implementation methods: the DICOM-to-XML conversion module, the EECP waveform data management module, the retrieval and display of EECP waveforms, and the security mechanism of the system.
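A minimal sketch of serializing waveform samples to XML with a standard library, in the spirit of the DICOM-to-XML conversion module; the element and attribute names here are illustrative, not the paper's schema:

```python
import xml.etree.ElementTree as ET

def waveform_to_xml(patient_id, samples, fs_hz):
    """Hypothetical XML layout for sharing a waveform over the Web:
    record metadata as attributes, samples as space-separated text."""
    root = ET.Element("EECPWaveform",
                      patientID=patient_id, samplingRateHz=str(fs_hz))
    ET.SubElement(root, "Samples").text = " ".join(f"{s:.3f}" for s in samples)
    return ET.tostring(root, encoding="unicode")

doc = waveform_to_xml("P001", [0.1, 0.25, 0.4], 500)
print(doc)
```

Unlike a binary DICOM file, this text form can be parsed directly in a browser or by any XML-aware client, which is the interoperability argument the paper makes.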
Providing Epistemic Support For Assessments Through Mobile-Supported Sharing Activities
Raclaw, Joshua; Robles, Jessica S.; DiDomenico, Stephen M.
2017-01-01
This paper examines how participants in face-to-face conversation employ mobile phones as a resource for social action. We focus on what we call mobile-supported sharing activities, in which participants use a mobile phone to share text or images with others by voicing text aloud from their mobile or providing others with visual access to the device’s display screen. Drawing from naturalistic video recordings, we focus on how mobile-supported sharing activities invite assessments by providing access to an object that is not locally accessible to the participants. Such practices make relevant co-participants’ assessment of these objects and allow for different forms of co-participation across sequence types. We additionally examine how the organization of assessments during these sharing activities displays sensitivity to preference structure. The analysis illustrates the relevance of embodiment, local objects, and new communicative technologies to the production of action in co-present interaction. Data are in American English.
Social Media and Total Joint Arthroplasty: An Analysis of Patient Utilization on Instagram.
Ramkumar, Prem N; Navarro, Sergio M; Haeberle, Heather S; Chughtai, Morad; Flynn, Megan E; Mont, Michael A
2017-09-01
The purpose of this study was to analyze the nature of content shared by total joint arthroplasty patients on Instagram. Specifically, we evaluated social media posts for: (1) perspective and timing; (2) tone; (3) focus (activities of daily living [ADLs], rehabilitation, return-to-work); and (4) the comparison between hip and knee arthroplasties. A search of the public Instagram domain was performed over a 6-month period. Total hip and knee arthroplasties (THA and TKA) were selected for the analysis using the following terms: "#totalhipreplacement," "#totalkneereplacement," and associated terms. In total, 1,287 individual public posts of human subjects were shared during the period. A categorical scoring system was utilized for media format (photo or video), time period (preoperative, perioperative, or postoperative), tone (positive or negative), return-to-work, ADLs, rehabilitation, surgical site, radiograph image, satisfaction, and dissatisfaction. Ninety-one percent of the posts were shared during the postoperative period. Ninety-three percent of posts had a positive tone. Thirty-four percent of posts focused on ADLs and 33.8% on rehabilitation. TKA patients shared more about their surgical site (14.5% vs 3.3%, P < .001) and rehabilitation (58.9% vs 8.8%, P < .001) than THA patients, whereas THA patients shared more about ADLs than TKA patients (60.5% vs 7.6%, P < .001). When sharing their experience on Instagram, arthroplasty patients did so with a positive tone, starting a week after surgery. TKA posts focused more on rehabilitation and wound healing than THA posts, whereas THA patients shared more posts on ADLs. The analysis of social media posts provides insight into what matters to patients after total joint arthroplasty. Copyright © 2017 Elsevier Inc. All rights reserved.
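Comparisons like the reported TKA-versus-THA percentages are typically made with a two-proportion test; a sketch of the standard two-proportion z-test with illustrative counts (the study's raw counts are not given in the abstract):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts matching the reported rates: 58.9% of TKA posts vs
# 8.8% of THA posts mentioning rehabilitation.
z, p = two_proportion_z(353, 600, 35, 400)
print(p < .001)   # -> True
```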
A web-portal for interactive data exploration, visualization, and hypothesis testing
Bartsch, Hauke; Thompson, Wesley K.; Jernigan, Terry L.; Dale, Anders M.
2014-01-01
Clinical research studies generate data that need to be shared and statistically analyzed by their participating institutions. The distributed nature of research and the different domains involved present major challenges to data sharing, exploration, and visualization. The Data Portal infrastructure was developed to support ongoing research in the areas of neurocognition, imaging, and genetics. Researchers benefit from the integration of data sources across domains, the explicit representation of knowledge from domain experts, and user interfaces providing convenient access to project specific data resources and algorithms. The system provides an interactive approach to statistical analysis, data mining, and hypothesis testing over the lifetime of a study and fulfills a mandate of public sharing by integrating data sharing into a system built for active data exploration. The web-based platform removes barriers for research and supports the ongoing exploration of data.
Creating Critical Media Analysis Skills.
ERIC Educational Resources Information Center
Fortuna, Carolyn
This paper describes a unit of instruction about media images--how they work, ways they affect people, and means to use them--in which young adolescents learn the consequences of becoming a media-literate consumer. The paper explains that in the unit, divided into 2 "clusters," 110 Boston-area eighth graders share 5 core academic…
Multidimensional Shape Similarity in the Development of Visual Object Classification
ERIC Educational Resources Information Center
Mash, Clay
2006-01-01
The current work examined age differences in the classification of novel object images that vary in continuous dimensions of structural shape. The structural dimensions employed are two that share a privileged status in the visual analysis and representation of objects: the shape of discrete prominent parts and the attachment positions of those…
Integration of digital gross pathology images for enterprise-wide access.
Amin, Milon; Sharma, Gaurav; Parwani, Anil V; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B; Lauro, Gonzalo Romero; Pantanowitz, Liron
2012-01-01
Sharing digital pathology images for enterprise-wide use in a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined the data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then "wrapped" according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed, requiring manual intervention. Uploaded images were immediately available to institution-wide PACS users. Since inception, user feedback has been positive. Enterprise-wide PACS-based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a "DICOM wrapper" for multisystem compatibility.
Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm
NASA Astrophysics Data System (ADS)
Moumen, Abdelkader; Sissaoui, Hocine
2017-03-01
Vulnerability of communication of digital images is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It combines symmetric encryption, asymmetric encryption, and steganography. The image is encrypted using a symmetric algorithm; then the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least significant bits (LSB) steganographic scheme. The analysis results show that while offering faster computation, our method performs close to optimal in terms of accuracy.
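The pipeline described above (symmetric encryption of the image, asymmetric encryption of the session key, then LSB embedding of that encrypted key into the cipher image) can be sketched for its final, steganographic step as follows. This is a minimal illustration, not the paper's implementation: the AES and RSA stages are stood in for by placeholder byte strings, and the image is a flat list of 8-bit pixel values.

```python
def embed_lsb(pixels, payload):
    """Hide payload bytes in the least significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the LSB
    return stego

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes previously embedded with embed_lsb (LSB-first per byte)."""
    out = []
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

# Toy stand-ins: 'cipher_image' plays the symmetrically encrypted image,
# 'encrypted_key' plays the session key after asymmetric (e.g. RSA) encryption.
cipher_image = [200, 13, 77, 91] * 64          # 256 fake 8-bit pixels
encrypted_key = b"\x1f\xa0\x42"                # hypothetical RSA output
stego_image = embed_lsb(cipher_image, encrypted_key)
assert extract_lsb(stego_image, 3) == encrypted_key
```

Because only the least significant bit of each used pixel changes, the embedded key perturbs every pixel value by at most 1, which is the property that makes the hidden key visually imperceptible.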
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu
2018-06-01
Parallel detection, which can use the additional information of a pinhole plane image taken at every excitation scan position, could be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare the imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all conditions, and is therefore expected to be of use for future routine biomedical research.
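Among the restoration methods compared, Richardson-Lucy deconvolution has a particularly compact form: iteratively multiply the current estimate by the back-projected ratio of the observed data to the re-blurred estimate. A minimal 1-D sketch under our own assumptions (symmetric PSF, zero-padded borders, noiseless toy data), not the authors' code:

```python
def convolve(signal, kernel):
    """'Same'-size 1-D convolution with zero padding at the borders."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Minimal 1-D Richardson-Lucy deconvolution (symmetric PSF assumed)."""
    est = [1.0] * len(observed)          # flat, strictly positive start
    mirrored = psf[::-1]                 # adjoint of the blur operator
    for _ in range(iterations):
        blurred = convolve(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, mirrored)
        est = [e * c for e, c in zip(est, correction)]
    return est

psf = [0.25, 0.5, 0.25]                      # toy blur kernel
truth = [0, 0, 4.0, 0, 0, 2.0, 0, 0]         # two point sources
observed = convolve(truth, psf)
restored = richardson_lucy(observed, psf, iterations=200)
```

The multiplicative update keeps the estimate non-negative, which is why the method is favored for photon-count data such as confocal images.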
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject specific variations of metabolic activities under altered experimental conditions utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task and discuss here the application of ICA in multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
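The two hypotheses amount to a linear mixing model. In our notation (not the paper's), subject i's image decomposes as

```latex
x_i(v) \;=\; \sum_{j=1}^{m} a_{ij}\, s_j(v), \qquad \text{equivalently} \qquad X = A S,
```

where $x_i(v)$ is the value at voxel $v$ of subject $i$'s image, the $s_j$ are spatially independent component maps shared across subjects, and the $a_{ij}$ are subject-specific usage weights whose variation accounts for the inter-subject and inter-task differences described above.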
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
The Function Biomedical Informatics Research Network Data Repository
Keator, David B.; van Erp, Theo G.M.; Turner, Jessica A.; Glover, Gary H.; Mueller, Bryon A.; Liu, Thomas T.; Voyvodic, James T.; Rasmussen, Jerod; Calhoun, Vince D.; Lee, Hyo Jong; Toga, Arthur W.; McEwen, Sarah; Ford, Judith M.; Mathalon, Daniel H.; Diaz, Michele; O’Leary, Daniel S.; Bockholt, H. Jeremy; Gadde, Syam; Preda, Adrian; Wible, Cynthia G.; Stern, Hal S.; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G.
2015-01-01
The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical datasets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 dataset consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 Tesla scanners. The FBIRN Phase 2 and Phase 3 datasets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN’s multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. PMID:26364863
Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. 
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard. PMID:27257542
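The first processing step above, SUV normalization, divides each voxel's activity concentration by the injected dose per unit of body weight. A minimal sketch of the common body-weight variant; the paper's exact formula, units, and decay-correction handling are not specified in this abstract, so this is an assumption:

```python
def suv_bw(voxel_bq_per_ml, injected_dose_bq, body_weight_kg):
    """Body-weight SUV: activity concentration over dose per gram of tissue.

    Assumes decay correction has already been applied and ~1 g/mL tissue
    density, so Bq/mL and Bq/g are used interchangeably.
    """
    weight_g = body_weight_kg * 1000.0
    return voxel_bq_per_ml / (injected_dose_bq / weight_g)

# Hypothetical numbers: 10 kBq/mL voxel, 370 MBq injected, 70 kg patient
print(suv_bw(10_000.0, 370e6, 70.0))   # ≈ 1.89
```

In DICOM terms, a mapping like this is exactly what a Real World Value Mapping object encodes: the transform from stored pixel values to a quantitative unit such as SUV.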
2011-01-01
Background Renewed interest in plant × environment interactions has risen in the post-genomic era. In this context, high-throughput phenotyping platforms have been developed to create reproducible environmental scenarios in which the phenotypic responses of multiple genotypes can be analysed in a reproducible way. These platforms benefit hugely from the development of suitable databases for storage, sharing and analysis of the large amount of data collected. In the model plant Arabidopsis thaliana, most databases available to the scientific community contain data related to genetic and molecular biology and are characterised by inadequate description of plant developmental stages and of experimental metadata such as environmental conditions. Our goal was to develop a comprehensive information system for sharing of the data collected in PHENOPSIS, an automated platform for Arabidopsis thaliana phenotyping, with the scientific community. Description PHENOPSIS DB is a publicly available (URL: http://bioweb.supagro.inra.fr/phenopsis/) information system developed for storage, browsing and sharing of online data generated by the PHENOPSIS platform, offline data collected by experimenters, and experimental metadata. It provides modules coupled to a Web interface for (i) the visualisation of environmental data of an experiment, (ii) the visualisation and statistical analysis of phenotypic data, and (iii) the analysis of Arabidopsis thaliana plant images. Conclusions Firstly, data stored in the PHENOPSIS DB are of interest to the Arabidopsis thaliana community, particularly in allowing phenotypic meta-analyses directly linked to environmental conditions, on which publications are still scarce. Secondly, data or image analysis modules can be downloaded from the Web interface for direct usage or as the basis for modifications according to new requirements. 
Finally, the structure of PHENOPSIS DB provides a useful template for the development of other similar databases related to genotype × environment interactions. PMID:21554668
Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J
2013-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the gap between traditional histology and global “-omics” analyses. Included are side-by-side comparisons, objective quantification of biopsy findings, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785
A neotropical Miocene pollen database employing image-based search and semantic modeling.
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren
2014-08-01
Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer
This is a set of slides from a guest lecture for a class at the University of Texas, El Paso, on visualization and data analysis for high-performance computing. The topics covered are: trends in high-performance computing; scientific visualization, including OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, including in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and an analysis example.
NASA Astrophysics Data System (ADS)
Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.
2016-03-01
Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectral encoding methods enable confocal imaging through fiber optics and reduce system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both signals are detected simultaneously on a shared dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single mode core and multimode inner cladding of a double clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 x 500 pixels at 200 frames-per-second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between SESLO and OCT FOVs. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration, with broad applications in multivolume OCT averaging, image mosaicking, and intraoperative instrument tracking.
Cloud solution for histopathological image analysis using region of interest based compression.
Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana
2017-07-01
Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated, and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
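The split strategy described above can be sketched as: losslessly compress the pixels under the tissue mask, and apply cheap lossy coding to the background. This is an illustration of the idea under our own assumptions, not the authors' codec: zlib stands in for the lossless coder, 4-bit quantization stands in for the lossy background coding, and the tissue mask is taken as given.

```python
import zlib

def compress_slide(pixels, tissue_mask):
    """Losslessly compress tissue pixels; coarsely quantize the background.

    pixels: flat list of 8-bit grayscale values
    tissue_mask: list of booleans, True where tissue was detected
    """
    tissue = bytes(p for p, m in zip(pixels, tissue_mask) if m)
    background = bytes((p >> 4) << 4 for p, m in zip(pixels, tissue_mask) if not m)
    return {
        "mask": zlib.compress(bytes(tissue_mask)),
        "tissue": zlib.compress(tissue),          # lossless: round-trips exactly
        "background": zlib.compress(background),  # lossy: low nibble discarded
    }

def decompress_slide(blob, n):
    """Reassemble the scanline; tissue pixels are exact, background is coarse."""
    mask = list(zlib.decompress(blob["mask"]))
    tissue = iter(zlib.decompress(blob["tissue"]))
    background = iter(zlib.decompress(blob["background"]))
    return [next(tissue) if m else next(background) for m in mask[:n]]

pixels = [10, 200, 55, 130, 128]                 # toy 8-bit scanline
mask = [False, True, False, True, True]          # hypothetical tissue detector output
restored = decompress_slide(compress_slide(pixels, mask), len(pixels))
assert [p for p, m in zip(restored, mask) if m] == [200, 130, 128]  # tissue exact
```

The diagnostically relevant region survives bit-for-bit, while the (typically large and nearly uniform) background both quantizes and deflates well, which is where the compression-ratio gain comes from.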
Gapped pulses for frequency-swept MRI
NASA Astrophysics Data System (ADS)
Idiyatullin, Djaudat; Corum, Curt; Moeller, Steen; Garwood, Michael
2008-08-01
A recently introduced method called SWIFT (SWeep Imaging with Fourier Transform) is a fundamentally different approach to MRI which is particularly well suited to imaging objects with extremely fast spin-spin relaxation rates. The method exploits a frequency-swept excitation pulse and virtually simultaneous signal acquisition in a time-shared mode. Correlation of the spin system response with the excitation pulse function is used to extract the signals of interest. With SWIFT, image quality is highly dependent on producing uniform and broadband spin excitation. These requirements are satisfied by using frequency-modulated pulses belonging to the hyperbolic secant family (HSn pulses). This article describes the experimental steps needed to properly implement HSn pulses in SWIFT. In addition, properties of HSn pulses in the rapid passage, linear region are investigated, followed by an analysis of the pulses after inserting the "gaps" needed for time-shared excitation and acquisition. Finally, compact expressions are presented to estimate the amplitude and flip angle of the HSn pulses, as well as the relative energy deposited by the SWIFT sequence.
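For reference, the HSn family is conventionally defined by a sech-shaped amplitude modulation and a frequency sweep proportional to the integral of its square (standard definitions from the adiabatic-pulse literature; the notation is ours, not this article's):

```latex
\omega_1(\tau) = \omega_1^{\max}\,\operatorname{sech}\!\left(\beta\tau^{\,n}\right),
\qquad
\Delta\omega(\tau) \;\propto\; \int_{0}^{\tau} \operatorname{sech}^{2}\!\left(\beta\tau'^{\,n}\right)\mathrm{d}\tau',
\qquad
\tau = \frac{2t}{T_p} - 1 \in [-1,\,1],
```

where $T_p$ is the pulse length and $\beta$ a truncation factor; for $n = 1$ the sweep integrates to the familiar $\tanh(\beta\tau)$ frequency modulation of the classic hyperbolic secant pulse.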
The content of social media's shared images about Ebola: a retrospective study.
Seltzer, E K; Jean, N S; Kramer-Golinkoff, E; Asch, D A; Merchant, R M
2015-09-01
Social media have strongly influenced awareness and perceptions of public health emergencies, but a considerable amount of social media content is now carried through images, rather than just text. This study's objective is to explore how image-sharing platforms are used for information dissemination in public health emergencies. Retrospective review of images posted on two popular image-sharing platforms to characterize public discourse about Ebola. Using the keyword '#ebola' we identified a 1% sample of images posted on Instagram and Flickr across two sequential weeks in November 2014. Images from both platforms were independently coded by two reviewers and characterized by themes. We reviewed 1217 images posted on Instagram and Flickr and identified themes. Nine distinct themes were identified. These included: images of health care workers and professionals [308 (25%)], West Africa [75 (6%)], the Ebola virus [59 (5%)], and artistic renderings of Ebola [64 (5%)]. Also identified were images with accompanying embedded text related to Ebola and associated: facts [68 (6%)], fears [40 (3%)], politics [46 (4%)], and jokes [284 (23%)]. Several [273 (22%)] images were unrelated to Ebola or its sequelae. Instagram images were primarily coded as jokes [255 (42%)] or unrelated [219 (36%)], while Flickr images primarily depicted health care workers and other professionals [281 (46%)] providing care or other services for prevention or treatment. Image sharing platforms are being used for information exchange about public health crises, like Ebola. Use differs by platform and discerning these differences can help inform future uses for health care professionals and researchers seeking to assess public fears and misinformation or provide targeted education/awareness interventions. Copyright © 2015 The Royal Institute of Public Health. All rights reserved.
Orange Button Solar Data Exchange | Energy Analysis | NREL
The new Orange Button Solar Data Exchange tool serves as an online resource for the solar industry to share, sell, or retrieve solar data and connect with colleagues.
Cancer Slide Digital Archive (CDSA) | Informatics Technology for Cancer Research (ITCR)
The CDSA is a web-based platform to support the sharing, management and analysis of digital pathology data. The Emory instance currently hosts over 23,000 images from The Cancer Genome Atlas, and the software is being developed within the ITCR grant to be deployable as a digital pathology platform for other labs and cancer institutes.
General-purpose interface bus for multiuser, multitasking computer system
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Roth, Don J.; Stang, David B.
1990-01-01
The architecture of a multiuser, multitasking, virtual-memory computer system intended for use by a medium-size research group is described. There are three central processing units (CPUs) in the configuration, each with 16 MB of memory and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast Fourier transforms. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.
Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley
2014-01-01
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399
Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2
Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.
2014-01-01
SUMMARY Image segmentation is a very important step in the computerized analysis of digital images. The maxflow mincut approach has been successfully used to obtain minimum energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB, and hardware synchronization for each 64-bit word. It is thus well-suited to the parallelization of graph theoretic algorithms, such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32000² pixels in size, well beyond the largest previously reported in the literature. PMID:25598745
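The Goldberg-Tarjan preflow-push (push-relabel) algorithm maintains a preflow and per-node heights, pushing excess along admissible residual edges and relabeling a node when it gets stuck. A compact serial sketch on an adjacency-matrix graph, for illustration only; the paper's XMT-2 implementation is massively parallel and differs substantially:

```python
def max_flow(capacity, s, t):
    """Serial FIFO push-relabel maxflow on an n x n capacity matrix."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n                      # source starts at height n
    for v in range(n):                 # saturate all edges out of the source
        if capacity[s][v] > 0:
            flow[s][v] = capacity[s][v]
            flow[v][s] = -capacity[s][v]
            excess[v] = capacity[s][v]
            excess[s] -= capacity[s][v]
    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop(0)
        for v in range(n):             # push excess along admissible edges
            if excess[u] == 0:
                break
            if capacity[u][v] - flow[u][v] > 0 and height[u] == height[v] + 1:
                d = min(excess[u], capacity[u][v] - flow[u][v])
                flow[u][v] += d
                flow[v][u] -= d
                excess[u] -= d
                excess[v] += d
                if v not in (s, t) and excess[v] > 0 and v not in active:
                    active.append(v)
        if excess[u] > 0:              # stuck: relabel above lowest residual neighbor
            height[u] = 1 + min(height[v] for v in range(n)
                                if capacity[u][v] - flow[u][v] > 0)
            active.append(u)
    return sum(flow[s][v] for v in range(n))

# Hypothetical 4-node network: s=0, t=3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # 5
```

Push and relabel operations touch only a node and its neighbors, which is exactly the locality that makes the algorithm attractive for fine-grained multithreaded machines like the XMT-2, at the cost of needing per-word synchronization on shared excess and height values.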
Reference Pricing, Consumer Cost-Sharing, and Insurer Spending for Advanced Imaging Tests.
Robinson, James C; Whaley, Christopher; Brown, Timothy T
2016-12-01
Fees charged for similar imaging tests often vary dramatically within the same market, leading to wide variation in insurer spending and consumer cost-sharing. Reference pricing is an insurance design that offers good coverage to patients up to a defined contribution limit but requires patients who select high-priced facilities to pay the remainder out of pocket. To measure the association between implementation of reference pricing and patient choice of facility, test prices, out-of-pocket spending, and insurer spending for advanced imaging (CT and MRI) procedures. Difference-in-differences multivariable analysis of insurance claims data. The study included 4751 employees of a national grocery chain (treatment group) and 23,428 enrollees in the nation's largest private insurance plan (comparison group) who used CT or MRI tests between 2010 and 2013. Patient choice of facility, price paid per test, patient out-of-pocket cost-sharing, and employer spending. Compared with trends in prices paid by insurance enrollees not subject to reference pricing, and after adjusting for characteristics of tests and patients, implementation of reference pricing was associated with a 12.5% (95% CI, -25.0%, 2.1%) reduction in average price paid per test by the end of the second full year of the program for CT scans and a 10.5% (95% CI, -16.9%, 3.6%) reduction for MRIs. Out-of-pocket cost-sharing by patients declined by $71,508 (13.8%). The savings accruing to employees amounted to 45.5% of total savings from reference pricing, with the remainder accruing to the employer. Implementation of reference pricing led to reductions in payments by both employer and employees.
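The difference-in-differences design used here compares the pre/post change in the treatment group against the same change in the comparison group; stripped of covariate adjustment, it reduces to a single subtraction. A toy sketch with hypothetical prices, not the study's data:

```python
def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Plain difference-in-differences: change in treated minus change in controls."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean prices per CT scan (dollars) before/after reference pricing
print(did_estimate(1200.0, 1000.0, 1150.0, 1100.0))   # -150.0
```

Subtracting the control group's change nets out market-wide price trends, so the remaining difference is attributed to the reference-pricing program (under the usual parallel-trends assumption).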
Photogrammetry on glaciers: Old and new knowledge
NASA Astrophysics Data System (ADS)
Pfeffer, W. T.; Welty, E.; O'Neel, S.
2014-12-01
In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in the image-processing power of computers, and very innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery, but have no efficient means to extract the data they desire from them. In many cases these are single-image time series where the processing limitation lies in the paucity of methods to obtain 3-dimensional object-space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated but no automated means is in hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object-space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results, and sometimes buried in decades-old pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers. 
Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, some further effort to share knowledge and methods would be a great benefit for the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.
The Virginia Slims identity crisis: an inside look at tobacco industry marketing to women
Toll, B; Ling, P
2005-01-01
Objectives: Because no prior studies have comprehensively analysed previously secret tobacco industry documents describing the marketing of female brands, the Virginia Slims brand was studied to explore how Philip Morris and competitors develop and adapt promotional campaigns targeting women. Methods: Analysis of previously secret tobacco industry documents. The majority of the documents used were from Philip Morris. Results: The key to Virginia Slims advertising was creating an aspirational image which women associated with the brand. Virginia Slims co-opted women's liberation slogans to build a modern female image from 1968 through to the 1980s, and its market share grew from 0.24% to 3.16% during that time period. Ironically, the feminist image that worked very well for the brand was also the reason for its subsequent problems. Philip Morris experienced unprecedented losses in market share in the early 1990s, with a decline in market share for four consecutive years from 3.16% to 2.26%; they attributed this decline both to the fact that the brand's feminist image no longer appealed to young women aged 18–24 years, and to increased competition from more contemporary and lower priced competitors. Throughout the 1990s, attempts to reacquire young women while retaining Virginia Slims' loyal (now older) smokers were made using a "King Size" line extension, new slogans, and loyalty-building promotions. Conclusions: Tobacco advertisers initially created distinct female brands with aspirational images; continued appeal to young women was critical for long term growth. The need for established brands to evolve to maintain relevance to young women creates an opportunity for tobacco counter-marketing, which should undermine tobacco brand imagery and promote aspirational smoke-free lifestyle images. Young women aged 18–24 years are extremely valuable to the tobacco industry and should be a focus for tobacco control programmes. PMID:15923467
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data across subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects.
We further compared our method with the iMSF method (using incomplete MRI and PET images) and also with the single-task classification method (using only MRI, or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method. PMID:24820966
Automated quantitative muscle biopsy analysis system
NASA Technical Reports Server (NTRS)
Castleman, Kenneth R. (Inventor)
1980-01-01
An automated system aids the diagnosis of neuromuscular diseases by producing fiber-size histograms from histochemically stained muscle biopsy tissue. Televised images of the microscopic fibers are processed electronically by a multi-microprocessor computer, which isolates, measures, and classifies the fibers and displays the fiber-size distribution. The architecture of the multi-microprocessor computer, which can be iterated to any required degree of complexity, features a series of individual microprocessors P_n, each receiving data from a shared memory M_(n-1) and outputting processed data to a separate shared memory M_(n+1) under control of a program stored in a dedicated memory M_n.
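The pipelined architecture described above, where each processor consumes from one shared memory and produces into the next, can be sketched with thread-safe queues standing in for the shared memories. The stage functions below (isolate/measure/classify) are illustrative toys, not the patented fiber-analysis algorithms:

```python
# Sketch of a memory-coupled processor pipeline: stage n reads from
# buffer n-1 and writes to buffer n. Stage logic here is a toy stand-in.
import threading
import queue

def stage(inbuf, outbuf, fn):
    """One processor: consume from the upstream memory, produce downstream."""
    while True:
        item = inbuf.get()
        if item is None:            # sentinel: propagate shutdown downstream
            outbuf.put(None)
            return
        outbuf.put(fn(item))

# Shared memories M_0 .. M_3 between the stages.
m0, m1, m2, m3 = (queue.Queue() for _ in range(4))

isolate  = lambda img: img["fibers"]                 # pull fibers out of the image
measure  = lambda fibers: [len(f) for f in fibers]   # one size per fiber
classify = lambda sizes: {"small": sum(s < 3 for s in sizes),
                          "large": sum(s >= 3 for s in sizes)}

for args in [(m0, m1, isolate), (m1, m2, measure), (m2, m3, classify)]:
    threading.Thread(target=stage, args=args, daemon=True).start()

m0.put({"fibers": [[1, 2], [1, 2, 3, 4], [5]]})      # a toy "image"
m0.put(None)
result = m3.get()                                    # counts from the last stage
print(result)   # → {'small': 2, 'large': 1}
```

Because each stage owns only its input and output buffers, stages can be added (or the chain iterated, as the abstract puts it) without changing the others.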
Garcia, Justin R; Gesselman, Amanda N; Siliman, Shadia A; Perry, Brea L; Coe, Kathryn; Fisher, Helen E
2016-07-29
Background: The transmission of sexual images and messages via mobile phone or other electronic media (sexting) has been associated with a variety of mostly negative social and behavioural consequences. Research on sexting has focussed on youth, with limited data across demographics and with little known about the sharing of private sexual images and messages with third parties. Methods: The present study examines sexting attitudes and behaviours, including sending, receiving, and sharing of sexual messages and images, across gender, age, and sexual orientation. A total of 5805 single adults were included in the study (2830 women; 2975 men), ranging in age from 21 to 75+ years. Results: Overall, 21% of participants reported sending and 28% reported receiving sexually explicit text messages; both sending and receiving 'sexts' were most common among younger respondents. Although 73.2% of participants reported discomfort with unauthorised sharing of sexts beyond the intended recipient, of those who had received sext images, 22.9% reported sharing them with others (on average with 3.17 friends). Participants also reported concern about the potential consequences of sexting on their social lives, careers, and psychosocial wellbeing. Conclusion: Views on the impact of sexting on reputation suggest a contemporary struggle to reconcile digital eroticism with real-world consequences. These findings suggest a need for future research into negotiations of sexting motivations, risks, and rewards.
Reusable Social Networking Capabilities for an Earth Science Collaboratory
NASA Astrophysics Data System (ADS)
Lynnes, C.; Da Silva, D.; Leptoukh, G. G.; Ramachandran, R.
2011-12-01
A vast untapped resource of data, tools, information and knowledge lies within the Earth science community. This is because it is difficult to share the full spectrum of these entities, particularly their full context. As a result, most knowledge exchange is through person-to-person contact at meetings, email and journal articles, each of which can support only a limited level of detail. We propose the creation of an Earth Science Collaboratory (ESC): a framework that would enable sharing of data, tools, workflows, results and the contextual knowledge about these information entities. The Drupal platform is well positioned to provide the key social networking capabilities for the ESC. As a proof of concept of a rich collaboration mechanism, we have developed a Drupal-based mechanism for graphically annotating and commenting on result images from analysis workflows in the online Giovanni analysis system for remote sensing data. The annotations can be tagged and shared with others in the community. These capabilities are further supplemented by a Research Notebook capability reused from another online analysis system named Talkoot. The goal is a reusable set of modules that can integrate with a variety of other applications, either within Drupal web frameworks or at a machine level.
The Function Biomedical Informatics Research Network Data Repository.
Keator, David B; van Erp, Theo G M; Turner, Jessica A; Glover, Gary H; Mueller, Bryon A; Liu, Thomas T; Voyvodic, James T; Rasmussen, Jerod; Calhoun, Vince D; Lee, Hyo Jong; Toga, Arthur W; McEwen, Sarah; Ford, Judith M; Mathalon, Daniel H; Diaz, Michele; O'Leary, Daniel S; Jeremy Bockholt, H; Gadde, Syam; Preda, Adrian; Wible, Cynthia G; Stern, Hal S; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G
2016-01-01
The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical data sets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 data set consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 T scanners. The FBIRN Phase 2 and Phase 3 data sets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN's multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Chia-Hua; Lee, Suiang-Shyan; Lin, Ja-Chen
2017-06-01
This all-in-one hiding method creates two transparencies that offer several decoding options: visual decoding with or without translation flipping, and computer decoding. In visual decoding, two less-important (or fake) binary secret images S1 and S2 can be revealed. S1 is viewed by directly stacking the two transparencies. S2 is viewed by flipping one transparency and translating the other to a specified coordinate before stacking. Finally, important/true secret files can be decrypted by a computer using information extracted from the transparencies. The encoding process that hides this information includes translated-flip visual cryptography, block types, polynomial-style sharing, and a linear congruential generator. A thief who obtained both transparencies, which are stored in distinct places, would still need to find the values of the keys used in computer decoding after viewing S1 and/or S2 by stacking. More likely, the thief would simply try every other kind of stacking and eventually give up searching for more secrets, because computer decoding is entirely different from stacking-based decoding. Unlike traditional image hiding, which uses images as host media, our method hides fine gray-level images in binary transparencies; thus, our host media are the transparencies themselves. Comparisons and analysis are provided.
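The stacking decoding described above rests on classical visual cryptography. A minimal sketch of the standard (2,2) construction (the Naor-Shamir scheme, not the authors' translated-flip variant): each secret pixel expands into a subpixel pair on each transparency, and physically stacking the transparencies acts as a pixelwise OR of opacity.

```python
# Minimal (2,2) visual cryptography: stacking two shares reveals a binary secret.
# 1 = opaque (black) subpixel, 0 = transparent (white) subpixel.
import random

def share_pixel(bit):
    """Expand one secret pixel into a subpixel pair on each transparency."""
    pattern = random.choice([(0, 1), (1, 0)])
    complement = tuple(1 - v for v in pattern)
    # White secret pixel: identical patterns (stacks to half-black).
    # Black secret pixel: complementary patterns (stacks to all-black).
    return (pattern, pattern) if bit == 0 else (pattern, complement)

def stack(a, b):
    """Stacking transparencies is a pixelwise OR of opacity."""
    return tuple(x | y for x, y in zip(a, b))

secret = [1, 0, 1, 1, 0]               # a tiny one-row binary secret image
shares = [share_pixel(b) for b in secret]

for bit, (s1, s2) in zip(secret, shares):
    stacked = stack(s1, s2)
    # Black pixels stack fully opaque (2 black subpixels); white stay half-black.
    assert sum(stacked) == (2 if bit else 1)
```

Note that each share on its own is uniformly random (every subpixel pair has exactly one black subpixel), so a single transparency leaks nothing about the secret, which is the security property the abstract's stacking options rely on.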
Malone, Joseph D.; El-Haddad, Mohamed T.; Bozic, Ivan; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.
2016-01-01
Scanning laser ophthalmoscopy (SLO) benefits diagnostic imaging and therapeutic guidance by allowing for high-speed en face imaging of retinal structures. When combined with optical coherence tomography (OCT), SLO enables real-time aiming and retinal tracking and provides complementary information for post-acquisition volumetric co-registration, bulk motion compensation, and averaging. However, multimodality SLO-OCT systems generally require dedicated light sources, scanners, relay optics, detectors, and additional digitization and synchronization electronics, which increase system complexity. Here, we present a multimodal ophthalmic imaging system using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) for in vivo human retinal imaging. SESLO reduces the complexity of en face imaging systems by multiplexing spatial positions as a function of wavelength. SESLO image quality benefited from single-mode illumination and multimode collection through a prototype double-clad fiber coupler, which optimized scattered light throughput and reduced speckle contrast while maintaining lateral resolution. Using a shared 1060 nm swept-source, shared scanner and imaging optics, and a shared dual-channel high-speed digitizer, we acquired inherently co-registered en face retinal images and OCT cross-sections simultaneously at 200 frames-per-second. PMID:28101411
Adolescents' presentation of food in social media: An explorative study.
Holmberg, Christopher; E Chaplin, John; Hillman, Thomas; Berg, Christina
2016-04-01
The study aimed to explore how adolescents communicate food images in a widely used social media image-sharing application. We examined how and in what context food was presented and the type of food items that were frequently portrayed by following a youth-related hashtag on Instagram. The hashtag #14år ("14 years") was used to find adolescent users on Instagram; these users' public photo streams were then searched for food items they had shared with others. Food items were identified and categorized based on type of food and how the food items were presented. Most of the adolescent users (85%) shared images containing food items. A majority of the images (67.7%) depicted foods high in calories but low in nutrients. Almost half of these images were arranged as a still life with food brand names clearly exposed. Many of these images were influenced by major food marketing campaigns. Fruits and vegetables occurred in 21.8% of all images. This food group was frequently portrayed zoomed in, with focus solely on the food and with a hashtag or caption expressing palatability. These images were often presented in the style of a cook book. Food was thus presented in varied ways. Adolescents themselves produced images copying food advertisements. This has clear health promotion implications, since it becomes more challenging to monitor and tackle young people's exposure to marketing of unhealthy foods in these popular online networks when images are part of a lifestyle that the young people want to promote. Shared images contain personal recommendations, which means they may have a more powerful effect than commercial advertising. Copyright © 2016 Elsevier Ltd. All rights reserved.
Carnegie Mellon University bioimaging day 2014: Challenges and opportunities in digital pathology
Rohde, Gustavo K.; Ozolek, John A.; Parwani, Anil V.; Pantanowitz, Liron
2014-01-01
Recent advances in digital imaging are impacting the practice of pathology. One of the key enabling technologies leading the way towards this transformation is whole slide imaging (WSI), which allows glass slides to be converted into large image files that can be shared, stored, and analyzed rapidly. Many applications around this novel technology have evolved in the last decade, including education, research and clinical applications. This publication highlights a collection of abstracts, each corresponding to a talk given at Carnegie Mellon University's (CMU) Bioimaging Day 2014, co-sponsored by the Biomedical Engineering and Lane Center for Computational Biology Departments at CMU. Topics related specifically to digital pathology are presented in this collection of abstracts. These include topics related to digital workflow implementation, imaging and artifacts, storage demands, and automated image analysis algorithms. PMID:25250190
Dorr, Ricardo; Ozu, Marcelo; Parisi, Mario
2007-04-15
Members of the water channel (aquaporin) family have been identified in central nervous system cells. A classic method to measure membrane water permeability and its regulation is to capture and analyse images of Xenopus laevis oocytes expressing these channels. Laboratories dedicated to the analysis of motion images usually have powerful equipment valued at thousands of dollars. However, some scientists consider that new approaches are needed to reduce costs in scientific labs, especially in developing countries. The objective of this work is to share a very low-cost hardware and software setup based on a well-selected webcam, a hand-made adapter to a microscope, and the use of free software to measure membrane water permeability in Xenopus oocytes. One of the main purposes of this setup is to maintain a high level of quality in images obtained at brief intervals (shorter than 70 ms). The presented setup helps to economize without sacrificing image analysis requirements.
Sala, E; Mema, E; Himoto, Y; Veeraraghavan, H; Brenton, J D; Snyder, A; Weigelt, B; Vargas, H A
2017-01-01
Tumour heterogeneity in cancers has been observed at the histological and genetic levels, and increased levels of intra-tumour genetic heterogeneity have been reported to be associated with adverse clinical outcomes. This review provides an overview of radiomics, radiogenomics, and habitat imaging, and examines the use of these newly emergent fields in assessing tumour heterogeneity and its implications. It reviews the potential value of radiomics and radiogenomics in assisting in the diagnosis of cancer disease and determining cancer aggressiveness. This review discusses how radiogenomic analysis can be further used to guide treatment therapy for individual tumours by predicting drug response and potential therapy resistance and examines its role in developing radiomics as biomarkers of oncological outcomes. Lastly, it provides an overview of the obstacles in these emergent fields today including reproducibility, need for validation, imaging analysis standardisation, data sharing and clinical translatability and offers potential solutions to these challenges towards the realisation of precision oncology. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J
2012-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.
An Analysis of the Educational Value of Low-Fidelity Anatomy Models as External Representations
ERIC Educational Resources Information Center
Chan, Lap Ki; Cheng, Maurice M. W.
2011-01-01
Although high-fidelity digital models of human anatomy based on actual cross-sectional images of the human body have been developed, reports on the use of physical models in anatomy teaching continue to appear. This article aims to examine the common features shared by these physical models and analyze their educational value based on the…
ERIC Educational Resources Information Center
Webb, Michael
2007-01-01
School students' immersion in a rich entertainment media environment has implications for classroom listening. Increasing interaction among media, design, games, communications and arts fields has led to a growing trend in the creative alignment of music and moving image. Video sharing sites such as YouTube are assisting in the proliferation and…
Functional Heterogeneity and Convergence in the Right Temporoparietal Junction
Lee, Su Mei; McCarthy, Gregory
2016-01-01
The right temporoparietal junction (rTPJ) is engaged by tasks that manipulate biological motion processing, Theory of Mind attributions, and attention reorienting. The proximity of activations elicited by these tasks raises the question of whether these tasks share common cognitive component processes that are subserved by common neural substrates. Here, we used high-resolution whole-brain functional magnetic resonance imaging in a within-subjects design to determine whether these tasks activate common regions of the rTPJ. Each participant was presented with the 3 tasks in the same imaging session. In a whole-brain analysis, we found that only the right and left TPJs were activated by all 3 tasks. Multivoxel pattern analysis revealed that the regions of overlap could still discriminate the 3 tasks. Notably, we found significant cross-task classification in the right TPJ, which suggests a shared neural process between the 3 tasks. Taken together, these results support prior studies that have indicated functional heterogeneity within the rTPJ but also suggest a convergence of function within a region of overlap. These results also call for further investigation into the nature of the function subserved in this overlap region. PMID:25477367
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics
Zhang, Lifei; Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Yang, Jinzhong; Court, Laurence E.
2015-01-01
Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified.
More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation, including newly developed ones from different users, can be easily and consistently shared so that results can be more easily reproduced between institutions. Results: Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. Conclusions: The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation. PMID:25735289
IBEX: an open infrastructure software platform to facilitate collaborative work in radiomics.
Zhang, Lifei; Fried, David V; Fave, Xenia J; Hunter, Luke A; Yang, Jinzhong; Court, Laurence E
2015-03-01
Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. The IBEX software package was developed using the MATLAB and c/c++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and c/c++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions. 
Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.
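IBEX's use of function handles to plug in feature-extraction algorithms can be sketched in Python with callables held in a registry. This is an illustrative analogue only; the names and structure below are hypothetical and are not IBEX's actual API.

```python
# Hypothetical sketch of function-handle-style plugin registration:
# each feature algorithm registers itself under a name, and the
# platform runs every registered algorithm on a region of interest.
FEATURE_REGISTRY = {}

def register_feature(name):
    """Decorator that plugs a feature-extraction function into the registry."""
    def wrap(fn):
        FEATURE_REGISTRY[name] = fn
        return fn
    return wrap

@register_feature("mean_intensity")
def mean_intensity(roi):
    flat = [v for row in roi for v in row]
    return sum(flat) / len(flat)

@register_feature("intensity_range")
def intensity_range(roi):
    flat = [v for row in roi for v in row]
    return max(flat) - min(flat)

def extract_all(roi):
    """Run every registered algorithm on a 2-D region of interest."""
    return {name: fn(roi) for name, fn in FEATURE_REGISTRY.items()}

roi = [[1.0, 2.0], [3.0, 4.0]]
print(extract_all(roi))  # {'mean_intensity': 2.5, 'intensity_range': 3.0}
```

A new algorithm extends the platform simply by registering another callable; no existing code changes, which is the decoupling the function-handle design buys.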
NASA Technical Reports Server (NTRS)
Hickey, J. S.
1983-01-01
The Mesoscale Analysis and Space Sensor (MASS) Data Management and Analysis System developed by Atsuko Computing International (ACI) on the MASS HP-1000 Computer System within the Systems Dynamics Laboratory of the Marshall Space Flight Center is described. The MASS Data Management and Analysis System was successfully implemented and utilized daily by atmospheric scientists to graphically display and analyze large volumes of conventional and satellite-derived meteorological data. The scientists can interactively process various atmospheric data (Sounding, Single Level, Grid, and Image) by utilizing the MASS (AVE80) software, whose programs share common data and user inputs, thereby reducing overhead, optimizing execution time, and thus enhancing user flexibility, usability, and understandability of the total system/software capabilities. In addition, ACI installed eight APPLE III graphics/imaging computer terminals in individual scientists' offices and integrated them into the MASS HP-1000 Computer System, thus providing a significant enhancement to the overall research environment.
Collaborating and sharing data in epilepsy research.
Wagenaar, Joost B; Worrell, Gregory A; Ives, Zachary; Dümpelmann, Matthias; Litt, Brian; Schulze-Bonhage, Andreas
2015-06-01
Technological advances are dramatically advancing translational research in epilepsy. Neurophysiology, imaging, and metadata are now recorded digitally in most centers, enabling quantitative analysis. Basic and translational research opportunities to use these data are exploding, but academic and funding cultures prevent this potential from being realized. Research on epileptogenic networks, antiepileptic devices, and biomarkers could progress rapidly if collaborative efforts to digest this "big neuro data" could be organized. Higher temporal and spatial resolution data are driving the need for novel multidimensional visualization and analysis tools. Crowd-sourced science, of the kind that drives innovation in computer science, could easily be mobilized for these tasks, were it not for competition for funding and attribution, and the lack of standard data formats and platforms. As these efforts mature, there is a great opportunity to advance epilepsy research through data sharing and increased collaboration within the international research community.
2015-07-06
…social media. Here we collected data in the form of social media users' comments to news articles about hydrofracking (or "fracking"). This topic was chosen because it is a timely social and political issue. Nearly 300 posts were analyzed with automated linguistic… Next, we expanded our efforts to address image-based content shared via social media. We opted to collect all images shared via Twitter…
Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy
Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki
2013-01-01
We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation that helps collect class labels, spatial spans, and the expert's confidence on lesions, and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
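The abstract mentions combining manual segmentations from multiple experts but does not specify the method. One simple, commonly used fusion rule (illustrative only, not necessarily the DiaRetDB1 method) is a per-pixel majority vote over binary lesion masks:

```python
# Combine binary lesion masks from several experts by per-pixel majority
# vote: a pixel is labeled as lesion only if more than half of the experts
# marked it. Ties (e.g. with an even number of experts) resolve to 0.
def majority_vote(masks):
    """masks: list of equally sized binary 2-D masks (lists of lists)."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) * 2 > n else 0
             for c in range(cols)]
            for r in range(rows)]

expert_a = [[1, 1, 0]]
expert_b = [[1, 0, 0]]
expert_c = [[1, 1, 1]]
print(majority_vote([expert_a, expert_b, expert_c]))  # [[1, 1, 0]]
```

More elaborate fusion schemes weight each expert by a confidence score, which matches the confidence annotations the tool above collects.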
Paddock, Stephen W; Eliceiri, Kevin W
2014-01-01
Confocal microscopy is an established light microscopical technique for imaging fluorescently labeled specimens with significant three-dimensional structure. Applications of confocal microscopy in the biomedical sciences include the imaging of the spatial distribution of macromolecules in either fixed or living cells, the automated collection of 3D data, the imaging of multiple labeled specimens and the measurement of physiological events in living cells. The laser scanning confocal microscope continues to be chosen for most routine work although a number of instruments have been developed for more specific applications. Significant improvements have been made to all areas of the confocal approach, not only to the instruments themselves, but also to the protocols of specimen preparation, to the analysis, the display, the reproduction, sharing and management of confocal images using bioinformatics techniques.
BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences
Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola
2015-01-01
Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099
A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes.
Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-01-01
In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more than k participants have the ability to reconstruct it. The scalability means that the amount of information in the reconstructed image scales in proportion to the number of the participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demand of different applications. Besides, almost all existing SSIS schemes are not applicable under noise circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a brand-new technique, called compressed sensing, which has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.
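The paper's construction uses compressed sensing, which the abstract does not detail. As a minimal illustration of the (k, n) threshold property itself, here is a toy Shamir-style sketch (not the authors' scheme) that splits a single 8-bit pixel value over the prime field GF(257):

```python
# Toy (k, n) threshold sharing of one pixel value: a random polynomial of
# degree k-1 hides the secret in its constant term; any k points determine
# the polynomial, fewer than k reveal nothing.
import random

P = 257  # prime just above the 8-bit pixel range

def make_shares(secret, k, n):
    """Split one pixel value into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(173, k=3, n=5)
print(recover(shares[:3]))  # 173, from any 3 of the 5 shares
```

A full image scheme applies this per pixel (or per block); the compressed-sensing approach in the paper instead trades shadow size against reconstruction quality, which a plain Shamir construction cannot do.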
The Image Gallery contains high-quality digital photographs available from … Select a category below to view additional thumbnail images. Images are available for direct download in 2 …
Lowe, H. J.
1993-01-01
This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer-to-peer file-sharing protocols. Image Engine supports both free-text and controlled-vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596
Managing and Querying Image Annotation and Markup in XML.
Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel
2010-01-01
Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
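The entry above describes querying AIM's hierarchical XML natively with XQuery. As a rough stdlib illustration of XPath-style queries over annotation markup, the snippet below uses a made-up schema (not the real AIM model) and the limited XPath subset in Python's `xml.etree.ElementTree`:

```python
# Illustrative only: a hypothetical annotation document, queried with
# ElementTree's XPath subset (attribute predicates and path steps).
import xml.etree.ElementTree as ET

doc = """
<imageAnnotations>
  <annotation lesion="mass">
    <roi shape="polygon"><point x="10" y="12"/><point x="14" y="18"/></roi>
  </annotation>
  <annotation lesion="calcification">
    <roi shape="circle"><point x="40" y="41"/></roi>
  </annotation>
</imageAnnotations>
"""

root = ET.fromstring(doc)

# All annotations whose lesion attribute is "mass"
masses = root.findall(".//annotation[@lesion='mass']")
print(len(masses))  # 1

# Every vertex of every polygon ROI
pts = root.findall(".//roi[@shape='polygon']/point")
print([(p.get("x"), p.get("y")) for p in pts])  # [('10', '12'), ('14', '18')]
```

A native XML database, as in the paper, pushes these same path expressions into indexed storage rather than walking an in-memory tree, which is what makes complex queries over large annotation sets tractable.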
Color extended visual cryptography using error diffusion.
Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu
2011-01-01
Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or grayscale VC schemes; however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
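The method builds on error-diffusion halftoning. As a rough sketch of the classic Floyd-Steinberg variant on a single grayscale channel (the paper's VIP-synchronized color construction is considerably more involved), one might write:

```python
# Floyd-Steinberg error diffusion: threshold each pixel, then push the
# quantization error onto unvisited neighbours with weights 7/16, 3/16,
# 5/16, 1/16, so local average brightness is preserved in the halftone.
def floyd_steinberg(img):
    """Binarize a grayscale image (0..255 values, list of lists)."""
    h, w = len(img), len(img[0])
    px = [list(map(float, row)) for row in img]  # working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 255 if px[y][x] >= 128 else 0
            err = px[y][x] - new
            out[y][x] = new
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out

flat_gray = [[128] * 4 for _ in range(4)]
halftone = floyd_steinberg(flat_gray)
# roughly half the pixels turn white, preserving average brightness
print(sum(v == 255 for row in halftone for v in row))
```

In the paper's scheme, this diffusion runs per color channel while the VIP positions are held fixed across shares, which is what keeps the stacked result meaningful.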
Integration of a neuroimaging processing pipeline into a pan-canadian computing grid
NASA Astrophysics Data System (ADS)
Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.
2012-02-01
The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.
Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo
2015-09-01
There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems ask clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for the transmission of HRIs (high-resolution images). This limits the accessibility of the software running on different devices and operating systems. In this paper, we propose a solution based on pure web pages for medical HRI lossless sharing and e-whiteboard discussion, and have set up a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, ensuring browser compatibility and server-client response efficiency; (3) business logic layer: we built an XML behavior-relationship storage structure to store and share users' behavior, keeping co-browsing and discussion between clients in real time; (4) web-user-interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript to build a client RIA (rich Internet application), providing desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs, and supports smooth discussion and co-browsing on any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely, and may be used in the fields of regional health, telemedicine and remote education at a low cost. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Reductions in Diagnostic Imaging With High Deductible Health Plans.
Zheng, Sarah; Ren, Zhong Justin; Heineke, Janelle; Geissler, Kimberley H
2016-02-01
Diagnostic imaging utilization grew rapidly over the past 2 decades. It remains unclear whether patient cost-sharing is an effective policy lever to reduce imaging utilization and spending. Using 2010 commercial insurance claims data of >21 million individuals, we compared diagnostic imaging utilization and standardized payments between High Deductible Health Plan (HDHP) and non-HDHP enrollees. Negative binomial models were used to estimate associations between HDHP enrollment and utilization, and were repeated for standardized payments. A hurdle model was used to estimate associations between HDHP enrollment and whether an enrollee had diagnostic imaging, and then the magnitude of associations for enrollees with imaging. Models with interaction terms were used to estimate associations between HDHP enrollment and imaging by risk score tercile. All models included controls for patient age, sex, geographic location, and health status. HDHP enrollment was associated with a 7.5% decrease in the number of imaging studies and a 10.2% decrease in standardized imaging payments. HDHP enrollees were 1.8 percentage points less likely to use imaging; once an enrollee had at least 1 imaging study, differences in utilization and associated payments were small. Associations between HDHP and utilization were largest in the lowest (least sick) risk score tercile. Increased patient cost-sharing may contribute to reductions in diagnostic imaging utilization and spending. However, increased cost-sharing may not encourage patients to differentiate between high-value and low-value diagnostic imaging services; better patient awareness and education may be a crucial part of any reductions in diagnostic imaging utilization.
A framework for secure and decentralized sharing of medical imaging data via blockchain consensus.
Patel, Vishal
2018-04-01
The electronic sharing of medical imaging data is an important element of modern healthcare systems, but current infrastructure for cross-site image transfer depends on trust in third-party intermediaries. In this work, we examine the blockchain concept, which enables parties to establish consensus without relying on a central authority. We develop a framework for cross-domain image sharing that uses a blockchain as a distributed data store to establish a ledger of radiological studies and patient-defined access permissions. The blockchain framework is shown to eliminate third-party access to protected health information, satisfy many criteria of an interoperable health system, and readily generalize to domains beyond medical imaging. Relative drawbacks of the framework include the complexity of the privacy and security models and an unclear regulatory environment. Ultimately, the large-scale feasibility of such an approach remains to be demonstrated and will depend on a number of factors which we discuss in detail.
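As a toy illustration of the ledger idea in this framework (not the paper's actual design), each block can commit to a study record plus the previous block's hash, so altering any earlier entry invalidates every later link:

```python
# Minimal hash-chained ledger: a block stores a record (e.g. a study
# reference and patient-defined access list), the previous block's hash,
# and its own hash over both. verify() walks the chain and rejects any
# tampering with a record or with the chain order.
import hashlib
import json

def make_block(prev_hash, record):
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    for i, blk in enumerate(chain):
        body = json.dumps({"prev": blk["prev"], "record": blk["record"]},
                          sort_keys=True)
        if blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"study": "MRI-001", "access": ["dr_a"]})]
chain.append(make_block(chain[-1]["hash"], {"study": "CT-002", "access": []}))
print(verify(chain))                        # True
chain[0]["record"]["access"].append("eve")  # tamper with permissions
print(verify(chain))                        # False
```

A real blockchain adds distributed consensus on which chain is authoritative; the sketch only shows the tamper-evidence property that lets participants audit access permissions without a trusted third party.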
Chervenak, Ann L; van Erp, Theo G M; Kesselman, Carl; D'Arcy, Mike; Sobell, Janet; Keator, David; Dahm, Lisa; Murry, Jim; Law, Meng; Hasso, Anton; Ames, Joseph; Macciardi, Fabio; Potkin, Steven G
2012-01-01
Progress in our understanding of brain disorders increasingly relies on the costly collection of large standardized brain magnetic resonance imaging (MRI) data sets. Moreover, the clinical interpretation of brain scans benefits from compare and contrast analyses of scans from patients with similar, and sometimes rare, demographic, diagnostic, and treatment status. A solution to both needs is to acquire standardized, research-ready clinical brain scans and to build the information technology infrastructure to share such scans, along with other pertinent information, across hospitals. This paper describes the design, deployment, and operation of a federated imaging system that captures and shares standardized, de-identified clinical brain images in a federation across multiple institutions. In addition to describing innovative aspects of the system architecture and our initial testing of the deployed infrastructure, we also describe the Standardized Imaging Protocol (SIP) developed for the project and our interactions with the Institutional Review Board (IRB) regarding handling patient data in the federated environment.
A novel secret sharing with two users based on joint transform correlator and compressive sensing
NASA Astrophysics Data System (ADS)
Zhao, Tieyu; Chi, Yingying
2018-05-01
Recently, the joint transform correlator (JTC) has been widely applied to image encryption and authentication. This paper presents a novel secret sharing scheme with two users based on JTC. Both users must be present during decryption, so the system has high security and reliability. In the scheme, the two users use their fingerprints to encrypt the plaintext, and they can decrypt only if both of them provide fingerprints that are successfully authenticated. The linear relationship between the plaintext and ciphertext is broken using compressive sensing, which can resist existing attacks on JTC. The results of the theoretical analysis and numerical simulation confirm the validity of the system.
Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif
2008-03-01
High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
Challenges in Sharing Information Effectively: Examples from Command and Control
ERIC Educational Resources Information Center
Sonnenwald, Diane H.
2006-01-01
Introduction: The goal of information sharing is to change a person's image of the world and to develop a shared working understanding. It is an essential component of collaboration. This paper examines barriers to sharing information effectively in dynamic group work situations. Method: Three types of battlefield training simulations were…
Sticking with Your University: The Importance of Satisfaction, Trust, Image, and Shared Values
ERIC Educational Resources Information Center
Schlesinger, Walesska; Cervera, Amparo; Pérez-Cabañero, Carmen
2017-01-01
In a context of increasing competition and financial difficulties for higher education institutions, alumni loyalty is a key factor for survival and success. This study tests a model derived from a relationship marketing perspective to investigate the roles of four variables (brand image, trust, satisfaction, and shared values) in the direct and…
On the Prediction of Flickr Image Popularity by Analyzing Heterogeneous Social Sensory Data.
Aloufi, Samah; Zhu, Shiai; El Saddik, Abdulmotaleb
2017-03-19
The increase in the popularity of social media has shattered the gap between the physical and virtual worlds. The content generated by people or social sensors on social media provides information about users and their living surroundings, which allows us to access a user's preferences, opinions, and interactions. This provides an opportunity for us to understand human behavior and enhance the services provided for both the real and virtual worlds. In this paper, we focus on the popularity prediction of social images on Flickr, a popular social photo-sharing site, and promote the research on utilizing social sensory data in the context of assisting people to improve their life on the Web. Social data are different from the data collected from physical sensors in that they exhibit special characteristics that pose new challenges. In addition to their huge quantity, social data are noisy, unstructured, and heterogeneous. Moreover, they involve human semantics and contextual data that require analysis and interpretation based on human behavior. Accordingly, we address the problem of popularity prediction for an image by exploiting three main factors that are important for making an image popular. In particular, we investigate the impact of the image's visual content, where the semantic and sentiment information extracted from the image shows an impact on its popularity, as well as the textual information associated with the image, which has a fundamental role in boosting the visibility of the image in keyword search results. Additionally, we explore social context, such as an image owner's popularity and how it positively influences the image popularity. With a comprehensive study on the effect of the three aspects, we further propose to jointly consider the heterogeneous social sensory data. Experimental results obtained from real-world data demonstrate that the three factors utilized complement each other in obtaining promising results in the prediction of image popularity on a social photo-sharing site.
Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A
2017-02-11
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
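The paper's validated wall-clock and resource-time models are not reproduced in the abstract. A toy model of the underlying trade-off (illustrative only, with made-up parameters) captures when the shared NFS link, rather than the CPUs, bounds the makespan:

```python
# Toy makespan model: N jobs each pull their data over one shared link
# before computing, so total wall-clock time is bounded below by whichever
# resource saturates first, the network or the compute waves.
def wall_clock(n_jobs, cores, compute_s, data_gb, link_gbps):
    transfer_s = n_jobs * data_gb * 8 / link_gbps    # shared link serializes I/O
    compute_waves = -(-n_jobs // cores) * compute_s  # ceil(n_jobs/cores) waves
    return max(transfer_s, compute_waves)            # the binding resource

# short jobs on large data: the network, not the CPUs, is the bottleneck
print(wall_clock(n_jobs=1000, cores=209, compute_s=30, data_gb=1, link_gbps=10))
# long jobs on small data: compute dominates and co-location buys little
print(wall_clock(n_jobs=1000, cores=209, compute_s=3600, data_gb=0.1, link_gbps=10))
```

This is the qualitative crossover the paper formalizes: Hadoop-style data/compute co-location pays off precisely in the first regime, where "short" jobs on "large" datasets saturate the shared storage network.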
A De-Identification Pipeline for Ultrasound Medical Images in DICOM Format.
Monteiro, Eriksson; Costa, Carlos; Oliveira, José Luís
2017-05-01
Clinical data sharing between healthcare institutions and between practitioners is often hindered by privacy protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental for establishing a workflow among parties. The anonymization of patient information burned into DICOM images requires elaborate processes somewhat more complex than simple de-identification of textual information. Usually, before sharing, there is a need for manual removal of specific areas containing sensitive information in the images. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications, and a Software-as-a-Service to streamline automatic de-identification of medical images, which is freely available for end-users. The proposed approach applies image processing functions and machine-learning models to build an automatic system that anonymizes medical images. To perform character recognition, we evaluated several machine-learning models, with Convolutional Neural Networks (CNNs) selected as the best approach. To assess the system's quality, 500 processed images were manually inspected, showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and works with the most recent versions of Google Chrome, Mozilla Firefox and Safari. A Docker image containing the proposed service is also publicly available for the community.
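The pipeline described above targets text burned into the ultrasound pixels; the complementary metadata side of DICOM de-identification can be sketched with plain dictionaries standing in for a real DICOM parser (the tag list and replacement token below are illustrative assumptions, not the authors' implementation):

```python
# Minimal sketch of metadata de-identification: blank a fixed list of
# patient-identifying attributes. Real DICOM work needs a parser such
# as pydicom and, as the paper shows, OCR for text burned into pixels.
SENSITIVE_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
                  "InstitutionName", "ReferringPhysicianName"}

def deidentify(header: dict) -> dict:
    # Return a copy with sensitive attributes replaced by a fixed token.
    return {tag: ("ANONYMIZED" if tag in SENSITIVE_TAGS else value)
            for tag, value in header.items()}

hdr = {"PatientName": "DOE^JANE", "Modality": "US", "Rows": 480}
clean = deidentify(hdr)
```

Clinically useful attributes (modality, image geometry) pass through unchanged, which is the property that keeps de-identified studies usable for collaboration.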
Cardiac image modelling: Breadth and depth in heart disease.
Suinesiaputra, Avan; McCulloch, Andrew D; Nash, Martyn P; Pontre, Beau; Young, Alistair A
2016-10-01
With the advent of large-scale imaging studies and big health data, and the corresponding growth in analytics, machine learning and computational image analysis methods, there are now exciting opportunities for deepening our understanding of the mechanisms and characteristics of heart disease. Two emerging fields are computational analysis of cardiac remodelling (shape and motion changes due to disease) and computational analysis of physiology and mechanics to estimate biophysical properties from non-invasive imaging. Many large cohort studies now underway around the world have been specifically designed based on non-invasive imaging technologies in order to gain new information about the development of heart disease from asymptomatic to clinical manifestations. These give an unprecedented breadth to the quantification of population variation and disease development. Also, for the individual patient, it is now possible to determine biophysical properties of myocardial tissue in health and disease by interpreting detailed imaging data using computational modelling. For these population and patient-specific computational modelling methods to develop further, we need open benchmarks for algorithm comparison and validation, open sharing of data and algorithms, and demonstration of clinical efficacy in patient management and care. The combination of population and patient-specific modelling will give new insights into the mechanisms of cardiac disease, in particular the development of heart failure, congenital heart disease, myocardial infarction, contractile dysfunction and diastolic dysfunction. Copyright © 2016. Published by Elsevier B.V.
Prior, Fred W; Erickson, Bradley J; Tarbox, Lawrence
2007-11-01
The cancer Biomedical Informatics Grid (caBIG) program was created by the National Cancer Institute to facilitate sharing of IT infrastructure, data, and applications among the National Cancer Institute-sponsored cancer research centers. The program was launched in February 2004 and now links more than 50 cancer centers. In April 2005, the In Vivo Imaging Workspace was added to promote the use of imaging in cancer clinical trials. At the inaugural meeting, four special interest groups (SIGs) were established. The Software SIG was charged with identifying projects that focus on open-source software for image visualization and analysis. To date, two projects have been defined by the Software SIG. The eXtensible Imaging Platform project has produced a rapid application development environment that researchers may use to create targeted workflows customized for specific research projects. The Algorithm Validation Tools project will provide a set of tools and data structures that will be used to capture measurement information and the associated metadata needed to allow a gold standard to be defined for a given database, against which change analysis algorithms can be tested. Through these and future efforts, the caBIG In Vivo Imaging Workspace Software SIG endeavors to advance imaging informatics and provide new open-source software tools to advance cancer research.
A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes
Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-01-01
In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more participants have the ability to reconstruct it. The scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a technique called compressed sensing, which has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme. PMID:28072851
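For contrast with the compressed-sensing scheme described above, the classic polynomial-based (k, n) sharing that such schemes are usually compared against (Shamir's method, sketched here over a one-byte prime field, per secret byte) works as follows:

```python
import random

P = 257  # prime field large enough for one byte per share

def make_shares(secret: int, k: int, n: int):
    # Random degree-(k-1) polynomial with the secret as constant term;
    # share i is the point (i, poly(i)) for i = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123, k=3, n=5)
assert recover(shares[:3]) == 123  # any 3 of the 5 shares suffice
```

Any k shares reconstruct the byte exactly; fewer than k reveal nothing, which is the security baseline SSIS schemes aim to keep while shrinking shadow size.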
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem for the merits of their well-defined mathematical modeling of the restore procedure. The total variation (TV) of infrared image is introduced as a L1 regularization term added to the objective energy functional. It converts the restoration process to an optimization problem of functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration technology with TV-L1 model exploits the remote sensing data obtained sufficiently and preserves information at edges caused by clouds. Numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can be easily implemented in parallelization. Therefore a parallel implementation of the TV-L1 filter based on multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. Quantitative analysis of measuring the restored image quality compared to input image is presented. Experiment results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can achieve the requirement of real-time image processing.
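The paper's parallel TV-L1 filter is not reproduced here, but the underlying variational idea (minimizing a total-variation term plus an L1 fidelity term) can be sketched serially using numerical gradients of a smoothed energy; the step size, smoothing constants, and iteration count below are illustrative assumptions:

```python
import math

def energy(u, f, lam=1.0, eps=1e-6):
    # Smoothed TV-L1 energy: total variation of u plus L1 fidelity to f.
    h, w = len(u), len(u[0])
    tv = sum(math.sqrt((u[y][min(x + 1, w - 1)] - u[y][x]) ** 2 +
                       (u[min(y + 1, h - 1)][x] - u[y][x]) ** 2 + eps)
             for y in range(h) for x in range(w))
    fid = sum(math.sqrt((u[y][x] - f[y][x]) ** 2 + eps)
              for y in range(h) for x in range(w))
    return tv + lam * fid

def denoise(f, lam=1.0, tau=0.05, iters=50, delta=1e-4):
    # Plain gradient descent using numerical gradients of the energy.
    # Slow but transparent; the paper parallelizes an analytic scheme.
    u = [[float(v) for v in row] for row in f]
    h, w = len(u), len(u[0])
    for _ in range(iters):
        base = energy(u, f, lam)
        grad = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                u[y][x] += delta
                grad[y][x] = (energy(u, f, lam) - base) / delta
                u[y][x] -= delta
        for y in range(h):
            for x in range(w):
                u[y][x] -= tau * grad[y][x]
    return u

f = [[0.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 0.0]]
u = denoise(f, lam=0.5)
```

Each pixel's update depends only on a small neighborhood through the energy, which is why the analytic version of this iteration parallelizes naturally across cores.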
Integrating DICOM structure reporting (SR) into the medical imaging informatics data grid
NASA Astrophysics Data System (ADS)
Lee, Jasper; Le, Anh; Liu, Brent
2008-03-01
The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis tools (CAD) that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are being developed in the radiology field, the generated DICOM Structure Reports (SR) holding key radiological findings and measurements that are not part of the DICOM image need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance and method involved in adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is a MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file into the existing MI2 Data Grid will be shown.
On the Prediction of Flickr Image Popularity by Analyzing Heterogeneous Social Sensory Data
Aloufi, Samah; Zhu, Shiai; El Saddik, Abdulmotaleb
2017-01-01
The increase in the popularity of social media has narrowed the gap between the physical and virtual worlds. The content generated by people or social sensors on social media provides information about users and their living surroundings, which allows us to access a user’s preferences, opinions, and interactions. This provides an opportunity for us to understand human behavior and enhance the services provided for both the real and virtual worlds. In this paper, we will focus on the popularity prediction of social images on Flickr, a popular social photo-sharing site, and promote the research on utilizing social sensory data in the context of assisting people to improve their life on the Web. Social data differ from data collected from physical sensors in that they exhibit special characteristics that pose new challenges. In addition to their huge quantity, social data are noisy, unstructured, and heterogeneous. Moreover, they involve human semantics and contextual data that require analysis and interpretation based on human behavior. Accordingly, we address the problem of popularity prediction for an image by exploiting three main factors that are important for making an image popular. In particular, we investigate the impact of the image’s visual content, where the semantic and sentiment information extracted from the image shows an impact on its popularity, as well as the textual information associated with the image, which has a fundamental role in boosting the visibility of the image in keyword search results. Additionally, we explore social context, such as an image owner’s popularity and how it positively influences the image’s popularity. With a comprehensive study on the effect of the three aspects, we further propose to jointly consider the heterogeneous social sensory data.
Experimental results obtained from real-world data demonstrate that the three factors utilized complement each other in obtaining promising results in the prediction of image popularity on a social photo-sharing site. PMID:28335498
A simple tool for neuroimaging data sharing
Haselgrove, Christian; Poline, Jean-Baptiste; Kennedy, David N.
2014-01-01
Data sharing is becoming increasingly common, but despite encouragement and facilitation by funding agencies, journals, and some research efforts, most neuroimaging data acquired today are still not shared, owing to persistent political, financial, social, and technical barriers. In particular, technical solutions are few for researchers who are not part of larger efforts with dedicated sharing infrastructures, and social barriers such as the time commitment required to share can keep data from becoming publicly available. We present a system for sharing neuroimaging data, designed to be simple to use and to provide benefit to the data provider. The system consists of a server at the International Neuroinformatics Coordinating Facility (INCF) and user tools for uploading data to the server. The primary design principle for the user tools is ease of use: the user identifies a directory containing Digital Imaging and Communications in Medicine (DICOM) data, provides their INCF Portal authentication, and provides identifiers for the subject and imaging session. The user tool anonymizes the data and sends it to the server. The server then runs quality control routines on the data, and the data and the quality control reports are made public. The user retains control of the data and may change the sharing policy as they need. The result is that in a few minutes of the user’s time, DICOM data can be anonymized and made publicly available, and an initial quality control assessment can be performed on the data. The system is currently functional, and user tools and access to the public image database are available at http://xnat.incf.org/. PMID:24904398
McDaniel, Patricia A; Cadman, Brie; Malone, Ruth E.
2016-01-01
Tobacco companies rely on corporate social responsibility (CSR) initiatives to improve their public image and advance their political objectives, which include thwarting or undermining tobacco control policies. For these reasons, implementation guidelines for the World Health Organization’s Framework Convention on Tobacco Control (FCTC) recommend curtailing or prohibiting tobacco industry CSR. To understand how and where major tobacco companies focus their CSR resources, we explored CSR-related content on 4 US and 4 multinational tobacco company websites in February 2014. The websites described a range of CSR-related activities, many common across all companies, and no programs were unique to a particular company. The websites mentioned CSR activities in 58 countries, representing nearly every region of the world. Tobacco companies appear to have a shared vision about what constitutes CSR, due perhaps to shared vulnerabilities. Most countries that host tobacco company CSR programs are parties to the FCTC, highlighting the need for full implementation of the treaty, and for funding to monitor CSR activity, replace industry philanthropy, and enforce existing bans. PMID:27261411
Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually run into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which the input and output are binary digital two-dimensional (2D) images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Owing to computing capabilities originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load forecasting dataset from the Global Energy Forecasting Competition 2012.
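The decimal-to-binary-image transform the abstract describes can be sketched in a minimal form, where each value becomes one row of bits (the row-per-value layout and bit width are assumptions; the paper does not specify its exact encoding here):

```python
def encode_image(values, bits=8):
    # Each decimal value becomes one row of a binary "image":
    # its unsigned binary expansion, most significant bit first.
    return [[(int(v) >> (bits - 1 - b)) & 1 for b in range(bits)]
            for v in values]

def decode_image(img):
    # Inverse transform: rows of bits back to integers.
    return [sum(bit << (len(row) - 1 - i) for i, bit in enumerate(row))
            for row in img]

img = encode_image([5, 200, 17])  # a 3 x 8 binary "image"
```

The resulting bitmap is what the CNN consumes; the same transform run in reverse turns the network's output image back into a decimal forecast.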
Atlas Toolkit: Fast registration of 3D morphological datasets in the absence of landmarks
Grocott, Timothy; Thomas, Paul; Münsterberg, Andrea E.
2016-01-01
Image registration is a gateway technology for Developmental Systems Biology, enabling computational analysis of related datasets within a shared coordinate system. Many registration tools rely on landmarks to ensure that datasets are correctly aligned; yet suitable landmarks are not present in many datasets. Atlas Toolkit is a Fiji/ImageJ plugin collection offering elastic group-wise registration of 3D morphological datasets, guided by segmentation of the interesting morphology. We demonstrate the method by combinatorial mapping of cell signalling events in the developing eyes of chick embryos, and use the integrated datasets to predictively enumerate Gene Regulatory Network states. PMID:26864723
Ma_Miss Experiment: miniaturized imaging spectrometer for subsurface studies
NASA Astrophysics Data System (ADS)
Coradini, A.; Ammannito, E.; Boccaccini, A.; de Sanctis, M. C.; di Iorio, T.; Battistelli, E.; Capanni, A.
2011-10-01
The study of the Martian subsurface will provide important constraints on the nature, timing and duration of alteration and sedimentation processes on Mars, as well as on the complex interactions between the surface and the atmosphere. A Drilling system, coupled with an in situ analysis package, is installed on the Exomars-Pasteur Rover to perform in situ investigations up to 2m in the Mars soil. Ma_Miss (Mars Multispectral Imager for Subsurface Studies) is a spectrometer devoted to observe the lateral wall of the borehole generated by the Drilling system. The instrument is fully integrated with the Drill and shares its structure and electronics.
Norman, Luke J; Carlisi, Christina O; Christakou, Anastasia; Murphy, Clodagh M; Chantiluke, Kaylita; Giampietro, Vincent; Simmons, Andrew; Brammer, Michael; Mataix-Cols, David; Rubia, Katya
2018-03-24
The aim of the current paper is to provide the first comparison of computational mechanisms and neurofunctional substrates in adolescents with attention-deficit/hyperactivity disorder (ADHD) and adolescents with obsessive-compulsive disorder (OCD) during decision making under ambiguity. Sixteen boys with ADHD, 20 boys with OCD, and 20 matched control subjects (12-18 years of age) completed a functional magnetic resonance imaging version of the Iowa Gambling Task. Brain activation was compared between groups using three-way analysis of covariance. Hierarchical Bayesian analysis was used to compare computational modeling parameters between groups. Patient groups shared reduced choice consistency and relied less on reinforcement learning during decision making relative to control subjects, while adolescents with ADHD alone demonstrated increased reward sensitivity. During advantageous choices, both disorders shared underactivation in ventral striatum, while OCD patients showed disorder-specific underactivation in the ventromedial orbitofrontal cortex. During outcome evaluation, shared underactivation to losses in patients relative to control subjects was found in the medial prefrontal cortex and shared underactivation to wins was found in the left putamen/caudate. ADHD boys showed disorder-specific dysfunction in the right putamen/caudate, which was activated more to losses in patients with ADHD but more to wins in control subjects. The findings suggest shared deficits in using learned reward expectancies to guide decision making, as well as shared dysfunction in medio-fronto-striato-limbic brain regions. However, findings of unique dysfunction in the ventromedial orbitofrontal cortex in OCD and in the right putamen in ADHD indicate additional, disorder-specific abnormalities and extend similar findings from inhibitory control tasks in the disorders to the domain of decision making under ambiguity. Copyright © 2018 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Esteva, M.; Ketcham, R. A.; Hanlon, M.; Pettengill, M.; Ranganath, A.; Venkatesh, A.
2016-12-01
Due to advances in imaging modalities such as X-ray microtomography and scattered electron microscopy, 2D and 3D imaged datasets of rock microstructure on nanometer to centimeter length scales allow investigation of nonlinear flow and mechanical phenomena using numerical approaches. This in turn produces various upscaled parameters required by subsurface flow and deformation simulators. However, a single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal (http://www.digitalrocksportal.org) that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geosciences or engineering researchers not necessarily trained in computer science or data analysis. Our objective is to enable scientific inquiry and engineering decisions founded on a data-driven basis. We show how the data loaded in the portal can be documented, referenced in publications via digital object identifiers, visualized, and linked to other repositories. We then show preliminary results on integrating a remote parallel visualization and flow simulation workflow with the pore structures currently stored in the repository. We finally discuss the issues of collecting correct metadata, data discoverability and repository sustainability. This is the first repository for this particular data, but it is part of the wider ecosystem of geoscience data and model cyber-infrastructure called "Earthcube" (http://earthcube.org/) sponsored by the National Science Foundation.
For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative.
Cloud-based image sharing network for collaborative imaging diagnosis and consultation
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo
2018-03-01
In this paper, we present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation through the Internet, which enables radiologists, specialists and physicians located at different sites to collaboratively and interactively perform imaging diagnosis or consultation for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system and multi-platform interactive image display devices, together with secured messaging and data communication. There are three kinds of components in the network: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. This network has been deployed on a public cloud platform of Alibaba through the Internet since March 2017 and used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.
Chuang, Tzu-Chao; Huang, Hsuan-Hung; Chang, Hing-Chiu; Wu, Ming-Ting
2014-06-01
To achieve better spatial and temporal resolution of dynamic contrast-enhanced MR imaging, the concept of k-space data sharing, or view sharing, can be implemented for PROPELLER acquisition. As found in other view-sharing methods, the loss of high-resolution dynamics is possible for view-sharing PROPELLER (VS-Prop) due to the temporal smoothing effect. The degradation can be more severe when a narrow blade with less phase encoding steps is chosen in the acquisition for higher frame rate. In this study, an iterative algorithm termed pixel-based optimal blade selection (POBS) is proposed to allow spatially dependent selection of the rotating blades, to generate high-resolution dynamic images with minimal reconstruction artifacts. In the reconstruction of VS-Prop, the central k-space which dominates the image contrast is only provided by the target blade with the peripheral k-space contributed by a minimal number of consecutive rotating blades. To reduce the reconstruction artifacts, the set of neighboring blades exhibiting the closest image contrast with the target blade is picked by POBS algorithm. Numerical simulations and phantom experiments were conducted in this study to investigate the dynamic response and spatial profiles of images generated using our proposed method. In addition, dynamic contrast-enhanced cardiovascular imaging of healthy subjects was performed to demonstrate the feasibility and advantages. The simulation results show that POBS VS-Prop can provide timely dynamic response to rapid signal change, especially for a small region of interest or with the use of narrow blades. The POBS algorithm also demonstrates its capability to capture nonsimultaneous signal changes over the entire FOV. In addition, both phantom and in vivo experiments show that the temporal smoothing effect can be avoided by means of POBS, leading to higher wash-in slope of contrast enhancement after the bolus injection. 
With the satisfactory reconstruction quality provided by the POBS algorithm, VS-Prop acquisition technique may find useful clinical applications in DCE MR imaging studies where both spatial and temporal resolutions play important roles.
I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.
Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
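BDBags build on the BagIt packaging format, whose core idea (an explicit per-file checksum manifest that makes dataset membership unambiguous) can be sketched with the standard library; the manifest line format follows BagIt's sha256 manifests, and the demo file is illustrative:

```python
import hashlib
import os
import tempfile

def make_manifest(root):
    # Walk a payload directory and emit one "<sha256>  <relative path>"
    # line per file, as in a BagIt manifest-sha256.txt. Checksums let a
    # recipient verify the dataset without trusting the transfer channel.
    lines = []
    for dirpath, _, files in sorted(os.walk(root)):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            lines.append(f"{digest}  {os.path.relpath(path, root)}")
    return lines

# Demo: a one-file payload directory.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "data.txt"), "wb") as fh:
    fh.write(b"hello")
manifest = make_manifest(tmp)
```

A Minid-style identifier would then name the bag itself (e.g., by hashing the manifest), so a short string unambiguously references the whole multi-element dataset.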
Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D
2017-11-01
Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 AACR . ©2017 American Association for Cancer Research.
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation methods based on the Gaussian radial basis function (GRBF) have high precision, their long calculation time still limits their application in the field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions, while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method provides a useful reference for applications of image interpolation.
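The GRBF interpolation being accelerated can be illustrated in one dimension: build the Gaussian kernel matrix over the sample points, solve for the weights, and evaluate the resulting interpolant. The shape parameter and sample data below are illustrative, and the CUDA-specific optimizations in the paper are omitted:

```python
import math

def grbf_weights(xs, ys, shape=1.0):
    # Solve A w = y where A[i][j] = exp(-(shape * |x_i - x_j|)^2),
    # using plain Gaussian elimination with partial pivoting
    # (fine for small systems; the paper solves much larger ones).
    n = len(xs)
    M = [[math.exp(-(shape * (xs[i] - xs[j])) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def grbf_eval(x, xs, w, shape=1.0):
    # Interpolant: weighted sum of Gaussians centered at the samples.
    return sum(wi * math.exp(-(shape * (x - xi)) ** 2) for wi, xi in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.0, 1.0]
w = grbf_weights(xs, ys)
```

Because every evaluation point is an independent weighted sum over the same kernel centers, the evaluation step maps directly onto one CUDA thread per output pixel, which is what the paper exploits.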
Reversible watermarking for knowledge digest embedding and reliability control in medical images.
Coatrieux, Gouenou; Le Guillou, Clara; Cauvin, Jean-Michel; Roux, Christian
2009-03-01
To improve medical image sharing in applications such as e-learning or remote diagnosis aid, we propose to make the image more usable by watermarking it with a digest of its associated knowledge. The aim of such a knowledge digest (KD) is for it to be used for retrieving similar images with either the same findings or differential diagnoses. It summarizes the symbolic descriptions of the image, the symbolic descriptions of the findings semiology, and the similarity rules that contribute to balancing the importance of previous descriptors when comparing images. Instead of modifying the image file format by adding some extra header information, watermarking is used to embed the KD in the pixel gray-level values of the corresponding images. When shared through open networks, watermarking also helps to convey reliability proofs (integrity and authenticity) of an image and its KD. The interest of these new image functionalities is illustrated in the updating of the distributed users' databases within the framework of an e-learning application demonstrator of endoscopic semiology.
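As a point of reference, the embedding idea can be sketched with plain LSB substitution (a simple, non-reversible stand-in; the paper's scheme is reversible, so the original gray levels can be restored exactly, and the payload below is hypothetical):

```python
import numpy as np

def embed_lsb(pixels, payload):
    """Hide payload bytes in the least-significant bits of gray-level values.

    Non-reversible LSB illustration only; the paper's reversible watermarking
    allows exact restoration of the host image after extraction.
    """
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()                      # copy; host image untouched
    assert bits.size <= flat.size, "payload too large for this image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bytes):
    bits = (pixels.flatten()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

img = np.full((8, 8), 128, dtype=np.uint8)       # toy 8-bit "medical image"
marked = embed_lsb(img, b"KD:polyp")             # hypothetical digest payload
print(extract_lsb(marked, 8))                    # → b'KD:polyp'
```

The watermarked pixels differ from the originals by at most one gray level, which is why the digest travels with the image file without changing its format or visibly altering it.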
Offering to Share: How to Put Heads Together in Autism Neuroimaging
ERIC Educational Resources Information Center
Belmonte, Matthew K.; Mazziotta, John C.; Minshew, Nancy J.; Evans, Alan C.; Courchesne, Eric; Dager, Stephen R.; Bookheimer, Susan Y.; Aylward, Elizabeth H.; Amaral, David G.; Cantor, Rita M.; Chugani, Diane C.; Dale, Anders M.; Davatzikos, Christos; Gerig, Guido; Herbert, Martha R.; Lainhart, Janet E.; Murphy, Declan G.; Piven, Joseph; Reiss, Allan L.; Schultz, Robert T.; Zeffiro, Thomas A.; Levi-Pearl, Susan; Lajonchere, Clara; Colamarino, Sophia A.
2008-01-01
Data sharing in autism neuroimaging presents scientific, technical, and social obstacles. We outline the desiderata for a data-sharing scheme that combines imaging with other measures of phenotype and with genetics, defines requirements for comparability of derived data and recommendations for raw data, outlines a core protocol including…
Greco, Giampaolo; Patel, Anand S.; Lewis, Sara C.; Shi, Wei; Rasul, Rehana; Torosyan, Mary; Erickson, Bradley J.; Hiremath, Atheeth; Moskowitz, Alan J.; Tellis, Wyatt M.; Siegel, Eliot L.; Arenson, Ronald L.; Mendelson, David S.
2015-01-01
Rationale and Objectives: Inefficient transfer of personal health records among providers negatively impacts quality of health care and increases cost. This multicenter study evaluates the implementation of the first Internet-based image-sharing system that gives patients ownership and control of their imaging exams, including assessment of patient satisfaction. Materials and Methods: Patients receiving any medical imaging exams in four academic centers were eligible to have images uploaded into an online, Internet-based personal health record. Satisfaction surveys were provided during recruitment with questions on ease of use, privacy and security, and timeliness of access to images. Responses were rated on a five-point scale and compared using logistic regression and McNemar's test. Results: A total of 2562 patients enrolled from July 2012 to August 2013. The median number of imaging exams uploaded per patient was 5. Most commonly, exams were plain X-rays (34.7%), computed tomography (25.7%), and magnetic resonance imaging (16.1%). Of 502 (19.6%) patient surveys returned, 448 indicated the method of image sharing (Internet, compact discs [CDs], both, other). Nearly all patients (96.5%) responded favorably to having direct access to images, and 78% reported viewing their medical images independently. There was no difference between Internet and CD users in satisfaction with privacy and security and timeliness of access to medical images. A greater percentage of Internet users compared to CD users reported access without difficulty (88.3% vs. 77.5%, P < 0.0001). Conclusion: A patient-directed, interoperable, Internet-based image-sharing system is feasible and surpasses the use of CDs with respect to accessibility of imaging exams while generating similar satisfaction with respect to privacy. PMID:26625706
GIFT-Cloud: A data sharing and collaboration platform for medical imaging research.
Doel, Tom; Shakir, Dzhoshkun I; Pratt, Rosalind; Aertsen, Michael; Moggridge, James; Bellon, Erwin; David, Anna L; Deprest, Jan; Vercauteren, Tom; Ourselin, Sébastien
2017-02-01
Clinical imaging data are essential for developing research software for computer-aided diagnosis, treatment planning and image-guided surgery, yet existing systems are poorly suited for data sharing between healthcare and academia: research systems rarely provide an integrated approach for data exchange with clinicians; hospital systems are focused towards clinical patient care with limited access for external researchers; and safe haven environments are not well suited to algorithm development. We have established GIFT-Cloud, a data and medical image sharing platform, to meet the needs of GIFT-Surg, an international research collaboration that is developing novel imaging methods for fetal surgery. GIFT-Cloud also has general applicability to other areas of imaging research. GIFT-Cloud builds upon well-established cross-platform technologies. The Server provides secure anonymised data storage, direct web-based data access and a REST API for integrating external software. The Uploader provides automated on-site anonymisation, encryption and data upload. Gateways provide a seamless process for uploading medical data from clinical systems to the research server. GIFT-Cloud has been implemented in a multi-centre study for fetal medicine research. We present a case study of placental segmentation for pre-operative surgical planning, showing how GIFT-Cloud underpins the research and integrates with the clinical workflow. GIFT-Cloud simplifies the transfer of imaging data from clinical to research institutions, facilitating the development and validation of medical research software and the sharing of results back to the clinical partners. GIFT-Cloud supports collaboration between multiple healthcare and research institutions while satisfying the demands of patient confidentiality, data security and data ownership. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
A pathologist-designed imaging system for anatomic pathology signout, teaching, and research.
Schubert, E; Gross, W; Siderits, R H; Deckenbaugh, L; He, F; Becich, M J
1994-11-01
Pathology images are derived from gross surgical specimens, light microscopy, immunofluorescence, electron microscopy, molecular diagnostic gels, flow cytometry, image analysis data, and clinical laboratory data in graphic form. We have implemented a network of desktop personal computers (PCs) that allows us to easily capture, store, and retrieve gross and microscopic, anatomic, and research pathology images. System architecture involves multiple image acquisition and retrieval sites and a central file server for storage. The digitized images are conveyed via a local area network to and from image capture or display stations. Acquisition sites consist of a high-resolution camera connected to a frame grabber card in a 486-type personal computer equipped with 16 MB of RAM (Table 1), a 1.05-gigabyte hard drive, and a 32-bit Ethernet card for access to our anatomic pathology reporting system. We have designed a push-button workstation for acquiring and indexing images that does not significantly interfere with surgical pathology sign-out.
Advantages of the system include the following: (1) Improving patient care: the availability of gross images at time of microscopic sign-out, verification of recurrence of malignancy from archived images, monitoring of bone marrow engraftment and immunosuppressive intervention after bone marrow/solid organ transplantation on repeat biopsies, and ability to seek instantaneous consultation with any pathologist on the network; (2) enhancing the teaching environment: building a digital surgical pathology atlas, improving the availability of images for conference support, and sharing cases across the network; (3) enhancing research: case study compilation, metastudy analysis, and availability of digitized images for quantitative analysis and permanent/reusable image records for archival study; and (4) other practical and economic considerations: storing case requisition images and hand-drawn diagrams deters the spread of gross room contaminants and results in considerable cost savings in photographic media for conferences, improved quality assurance by porting control stains across the network, and a multiplicity of other advantages that enhance image and information management in pathology.
Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.
2016-01-01
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
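The bottleneck the abstract describes can be illustrated with a toy wall-clock model (not the paper's actual theoretical model; all parameter values below are made up for illustration):

```python
import math

def nfs_wall_time(n_jobs, gb_per_job, secs_per_job, cores, net_gbit_s):
    """Toy wall-clock model for a cluster with shared NFS storage.

    Illustrative only: data stage-in is serialized through one shared
    network link, then computation proceeds in parallel waves over the
    available cores. The paper's theoretical models are more detailed.
    """
    transfer = n_jobs * gb_per_job * 8.0 / net_gbit_s    # seconds on the wire
    compute = math.ceil(n_jobs / cores) * secs_per_job   # parallel waves
    return transfer + compute

def colocated_wall_time(n_jobs, secs_per_job, cores):
    # Hadoop-style co-location of storage and compute: negligible transfer
    return math.ceil(n_jobs / cores) * secs_per_job

# Many short jobs on moderately large files: the shared link dominates.
print(nfs_wall_time(1000, 0.5, 30, 100, 1.0))   # → 4300.0
print(colocated_wall_time(1000, 30, 100))       # → 300
```

In this toy regime of short jobs and frequent data movement, the shared-storage estimate is dominated by transfer time, which is precisely the crossover point where the abstract argues "large scale" processing becomes "big data" and co-located frameworks pay off.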
International Lens Design Conference, Monterey, CA, June 11-14, 1990, Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, G.N.
1990-01-01
The present conference on lens design encompasses physical and geometrical optics, diffractive optics, the optimization of optical design, software packages, ray tracing, the use of artificial intelligence, the achromatization of materials, zoom optics, microoptics and GRIN lenses, and IR lens design. Specific issues addressed include diffraction-performance calculations in lens design, the optimization of the optical transfer function, a rank-down method for automatic lens design, applications of quadric surfaces, the correction of aberrations by using HOEs in UV and visible imaging systems, and an all-refractive telescope for intersatellite communications. Also addressed are automation techniques for optics manufacturing, all-reflective phased-array imaging telescopes, the thermal aberration analysis of a Nd:YAG laser, the analysis of illumination systems, athermalized FLIR optics, and the design of array systems using shared symmetry.
Privacy-preserving photo sharing based on a public key infrastructure
NASA Astrophysics Data System (ADS)
Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj
2015-09-01
A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. As it is not in the best interest of many such services if their users restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. A secure JPEG scrambling is applied to protect regional visual information in photos. Protected images remain compatible with JPEG coding and can therefore be viewed by anyone on any device. However, only those who are granted the secret keys are able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, built on the iOS platform.
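The core scrambling idea, a key-controlled reversible permutation of a protected region, can be sketched as follows (pixel-domain for simplicity; the paper scrambles in the JPEG domain so protected images remain JPEG-compatible, and the key distribution via attribute-based encryption is omitted):

```python
import numpy as np

def scramble(img, box, key):
    """Scramble one rectangular region with a key-derived permutation.

    Pixel-domain sketch of keyed, reversible region scrambling; not the
    paper's JPEG-domain scheme. `key` seeds the permutation.
    """
    y0, y1, x0, x1 = box
    region = img[y0:y1, x0:x1]
    perm = np.random.default_rng(key).permutation(region.size)
    out = img.copy()
    out[y0:y1, x0:x1] = region.reshape(-1)[perm].reshape(region.shape)
    return out

def descramble(img, box, key):
    y0, y1, x0, x1 = box
    region = img[y0:y1, x0:x1]
    perm = np.random.default_rng(key).permutation(region.size)
    inv = np.argsort(perm)                 # inverse of the keyed permutation
    out = img.copy()
    out[y0:y1, x0:x1] = region.reshape(-1)[inv].reshape(region.shape)
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
protected = scramble(img, (2, 6, 2, 6), key=42)
restored = descramble(protected, (2, 6, 2, 6), key=42)
assert np.array_equal(restored, img)       # only the right key restores it
```

Anyone can still view `protected` (the rest of the image is untouched), but the scrambled region is meaningless without the key, mirroring the "viewable by anyone, descramblable by key holders" property described above.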
An Ecometric Study of Recent Microfossils using High-throughput Imaging
NASA Astrophysics Data System (ADS)
Elder, L. E.; Hull, P. M.; Hsiang, A. Y.; Kahanamoku, S.
2016-02-01
The era of Big Data has ushered in the potential to collect population-level information in a manageable time frame. Taxon-free morphological trait analysis, referred to as ecometrics, can be used to examine and compare ecological dynamics between communities with entirely different species compositions. Until recently, population-level studies of morphology were difficult because of the time-intensive task of collecting measurements. To overcome this, we implemented advances in imaging technology and created software to automate measurements. This high-throughput set of methods collects assemblage-scale data, with methods tuned to foraminiferal samples (e.g., light objects on a dark background). Methods include serial focused dark-field microscopy, custom software (Automorph) to batch process images, extract 2D and 3D shape parameters and frames, and implement landmark-free geometric morphometric analyses. Informatics pipelines were created to store, catalog, and share images through the Yale Peabody Museum (YPM; peabody.yale.edu). We openly share software and images to enhance future data discovery. In less than a year we have generated over 25 TB of high-resolution semi-3D images for this initial study. Here, we take the first step towards developing ecometric approaches for open-ocean microfossil communities with a calibration study of community shape in recent sediments. We will present an overview of the 'shape' of modern planktonic foraminiferal communities from 25 Atlantic core-top samples (23 sites in the North and Equatorial Atlantic; 2 sites in the South Atlantic). In total, more than 100,000 microfossils and fragments were imaged from these sites' sediment cores, an unprecedented morphometric sample set. Correlates of community shape, including diversity, temperature, and latitude, will be discussed.
These methods have also been applied to images of limpets and fish teeth to date, and have the potential to be used on modern taxa to extract meaningful information on community responses to changing climate.
Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander
2015-01-01
Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform, "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field-based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC).
DIRT is a high-volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage, and share crop root images with metadata and compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such, it enables plant scientists to spend more time on science rather than on technology. All stored and computed data are easily accessible to the public and broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science.
1992-05-01
ocean color for retrieving ocean k(490) values are examined. The validation of the optical database from the satellite is assessed through comparison...for sharing results of this validation study. We wish to thank J. Mueller for helpful discussions in optics and satellite processing and for sharing his...of these data products are displayable as 512 x 512 8-bit image maps compatible with the PC-SeaPak image format. Valid data ranges are from 1 to 255
Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi
2014-02-01
This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
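The incoherence constraint among dictionaries is commonly expressed as a penalty on the cross-Gram matrices, e.g. sums of ||D_i^T D_j||_F^2; a sketch under that assumption follows (the exact penalty in the paper's objective may differ):

```python
import numpy as np

def cross_incoherence(dicts):
    """Sum of squared Frobenius norms of cross-Gram matrices ||D_i^T D_j||_F^2.

    One common way to encode mutual incoherence among category-specific
    and shared dictionaries; an assumption, not the paper's exact term.
    """
    total = 0.0
    for i, Di in enumerate(dicts):
        for j, Dj in enumerate(dicts):
            if i < j:
                total += np.linalg.norm(Di.T @ Dj, "fro") ** 2
    return total

# Dictionaries with mutually orthogonal atoms incur zero penalty, so
# minimizing this term pushes each dictionary to capture patterns the
# others do not (category-specific detail vs. shared structure).
D1 = np.array([[1.0], [0.0], [0.0]])
D2 = np.array([[0.0], [1.0], [0.0]])
print(cross_incoherence([D1, D2]))  # → 0.0
```

Adding such a term to the feature-coding objective is what separates the "subtle visual differences" captured per category from the "common visual patterns" captured by the shared dictionary.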
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Fangzhen; Wang, Huanhuan; Raghothamachar, Balaji
A new method has been developed to determine the fault vectors associated with stacking faults in 4H-SiC from their stacking sequences observed on high resolution TEM images. This method, analogous to the Burgers circuit technique for determination of dislocation Burgers vector, involves determination of the vectors required in the projection of the perfect lattice to correct the deviated path constructed in the faulted material. Results for several different stacking faults were compared with fault vectors determined from X-ray topographic contrast analysis and were found to be consistent. This technique is expected to be applicable to all structures comprising corner-shared tetrahedra.
Influence of the active nucleus on the multiphase interstellar medium in NGC 1068
NASA Technical Reports Server (NTRS)
Bland-Hawthorn, Jonathan; Weisheit, Jon; Cecil, Gerald; Sokolowski, James
1993-01-01
The luminous spiral NGC 1068 has now been imaged from x-ray to radio wavelengths at comparably high resolution (≲5″ FWHM). The bolometric luminosity of this well-known Seyfert is shared almost equally between the active nucleus and an extended 'starburst' disk. In an ongoing study, we are investigating the relative importance of the nucleus and the disk in powering the wide range of energetic activity observed throughout the galaxy. Our detailed analysis brings together a wealth of data: ROSAT HRI observations, VLA λλ6-20 cm and OVRO interferometry, λλ0.4-10.8 μm imaging, and Fabry-Perot spectrophotometry.
Enabling comparative effectiveness research with informatics: show me the data!
Safdar, Nabile M; Siegel, Eliot; Erickson, Bradley J; Nagy, Paul
2011-09-01
Both outcomes researchers and informaticians are concerned with information and data. As such, some of the central challenges to conducting successful comparative effectiveness research can be addressed with informatics solutions. Specific informatics solutions which address how data in comparative effectiveness research are enriched, stored, shared, and analyzed are reviewed. Imaging data can be made more quantitative, uniform, and structured for researchers through the use of lexicons and structured reporting. Secure and scalable storage of research data is enabled through data warehouses and cloud services. There are a number of national efforts to help researchers share research data and analysis tools. There is a diverse arsenal of informatics tools designed to meet the needs of comparative effectiveness researchers. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
Human Connectome Project Informatics: quality control, database services, and data visualization
Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.
2013-01-01
The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591
Efficient Access Control in Multimedia Social Networks
NASA Astrophysics Data System (ADS)
Sachan, Amit; Emmanuel, Sabu
Multimedia social networks (MMSNs) have provided a convenient way to share multimedia contents such as images, videos, blogs, etc. Contents shared by a person can be easily accessed by anybody else over the Internet. However, due to various privacy, security, and legal concerns, people often want to selectively share the contents only with their friends, family, colleagues, etc. Access control mechanisms play an important role in this situation. With access control mechanisms one can decide which persons can access a shared content and which cannot. But continuously growing content uploads and accesses, fine-grained access control requirements (e.g., different access control parameters for different parts of a picture), and specific access control requirements for multimedia contents can make the time complexity of access control very large. So it is important to study an efficient access control mechanism suitable for MMSNs. In this chapter we present an efficient bit-vector-transform-based access control mechanism for MMSNs. The proposed approach is also compatible with other requirements of MMSNs, such as access rights modification, content deletion, etc. Mathematical analysis and experimental results show the effectiveness and efficiency of our proposed approach.
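A minimal illustration of why bit vectors make access checks cheap follows (this is not the chapter's specific bit-vector transform, which also covers fine-grained multimedia rights; all names below are illustrative):

```python
class BitVectorACL:
    """Per-content access lists stored as integer bitmasks.

    Minimal sketch: each user maps to one bit, so membership tests and
    rights updates become O(1) bit operations rather than list scans.
    """

    def __init__(self, users):
        self.bit = {u: 1 << i for i, u in enumerate(users)}
        self.acl = {}                  # content id -> bitmask of allowed users

    def grant(self, cid, user):
        self.acl[cid] = self.acl.get(cid, 0) | self.bit[user]

    def revoke(self, cid, user):
        self.acl[cid] = self.acl.get(cid, 0) & ~self.bit[user]

    def can_access(self, cid, user):
        return bool(self.acl.get(cid, 0) & self.bit[user])

acl = BitVectorACL(["alice", "bob", "carol"])
acl.grant("photo1", "alice")
acl.grant("photo1", "bob")
acl.revoke("photo1", "bob")                        # rights modification
print(acl.can_access("photo1", "alice"),
      acl.can_access("photo1", "bob"))             # → True False
```

Content deletion is just `del acl.acl[cid]`, and per-region rights can be modeled by giving each region of a picture its own content id, which hints at how the bit-vector idea extends to the fine-grained multimedia requirements described above.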
Strategic business planning for internal medicine.
Ervin, F R
1996-07-01
The internal medicine generalist is at market risk with the expansion of managed care. The cottage industry of academic departments of internal medicine should apply more business tools to the internal medicine business problem. A strengths, weaknesses, opportunities, and threats (SWOT) analysis demonstrates the high vulnerability of the internal medicine generalist initiative. Recommitment to the professional values of internal medicine and an enhanced focus on the master clinician as the competitive core competency of internal medicine will be necessary to retain image and market share.
Dinevski, Nikolaj; Sarnthein, Johannes; Vasella, Flavio; Fierstra, Jorn; Pangalu, Athina; Holzmann, David; Regli, Luca; Bozinov, Oliver
2017-07-01
To determine the rate of surgical-site infections (SSI) in neurosurgical procedures involving a shared-resource intraoperative magnetic resonance imaging (ioMRI) scanner at a single institution derived from a prospective clinical quality management database. All consecutive neurosurgical procedures that were performed with a high-field, 2-room ioMRI between April 2013 and June 2016 were included (N = 195; 109 craniotomies and 86 endoscopic transsphenoidal procedures). The incidence of SSIs within 3 months after surgery was assessed for both operative groups (craniotomies vs. transsphenoidal approach). Of the 109 craniotomies, 6 patients developed an SSI (5.5%, 95% confidence interval [CI] 1.2-9.8%), including 1 superficial SSI, 2 cases of bone flap osteitis, 1 intracranial abscess, and 2 cases of meningitis/ventriculitis. Wound revision surgery due to infection was necessary in 4 patients (4%). Of the 86 transsphenoidal skull base surgeries, 6 patients (7.0%, 95% CI 1.5-12.4%) developed an infection, including 2 non-central nervous system intranasal SSIs (3%) and 4 cases of meningitis (5%). Logistic regression analysis revealed that the likelihood of infection significantly decreased with the number of operations in the new operational setting (odds ratio 0.982, 95% CI 0.969-0.995, P = 0.008). The use of a shared-resource ioMRI in neurosurgery did not demonstrate increased rates of infection compared with the current available literature. The likelihood of infection decreased with the accumulating number of operations, underlining the importance of surgical staff training after the introduction of a shared-resource ioMRI. Copyright © 2017 Elsevier Inc. All rights reserved.
Concurrent Image Processing Executive (CIPE). Volume 1: Design overview
NASA Technical Reports Server (NTRS)
Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1990-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
Cascaded systems analysis of charge sharing in cadmium telluride photon-counting x-ray detectors.
Tanguay, Jesse; Cunningham, Ian A
2018-05-01
Single-photon-counting (SPC) and spectroscopic x-ray detectors are under development in academic and industry laboratories for medical imaging applications. The spatial resolution of SPC and spectroscopic x-ray detectors is an important design criterion. The purpose of this article was to extend the cascaded systems approach to include a description of the spatial resolution of SPC and spectroscopic x-ray imaging detectors. A cascaded systems approach was used to model reabsorption of characteristic x rays, Coulomb repulsion, and diffusion in SPC and spectroscopic x-ray detectors. In addition to reabsorption, diffusion, and Coulomb repulsion, the model accounted for x-ray conversion to electron-hole (e-h) pairs, integration of e-h pairs in detector elements, electronic noise, and energy thresholding. The probability density function (PDF) describing the number of e-h pairs was propagated through each stage of the model and was used to derive new theoretical expressions for the large-area gain and modulation transfer function (MTF) of CdTe SPC x-ray detectors, and the energy bin sensitivity functions and MTFs of CdTe spectroscopic detectors. Theoretical predictions were compared with the results of MATLAB-based Monte Carlo (MC) simulations and published data. Comparisons were also made with the MTF of energy-integrating systems. Under general radiographic conditions, reabsorption, diffusion, and Coulomb repulsion together artificially inflate count rates by 20% to 50%. For thicker converters (e.g. 1000 μm) and larger detector elements (e.g. 500 μm pixel pitch) these processes result in modest inflation (i.e. ∼10%) in apparent count rates. Our theoretical and MC analyses predict that SPC MTFs will be degraded relative to those of energy-integrating systems for fluoroscopic, general radiographic, and CT imaging conditions. In most cases, this degradation is modest (i.e., ∼10% at the Nyquist frequency). 
However, for thicker converters, the SPC MTF can be degraded by up to 25% at the Nyquist frequency relative to energy-integrating (EI) systems. Additionally, unlike EI systems, the MTF of spectroscopic systems is strongly dependent on photon energy, which results in energy-bin-dependent spatial resolution. The PDF-transfer approach to modeling signal transfer through SPC and spectroscopic x-ray imaging systems provides a framework for understanding system performance. Application of this approach demonstrated that charge sharing artificially inflates the SPC image signal and degrades the MTF of SPC and spectroscopic systems relative to EI systems. These results further motivate the need for anti-charge-sharing circuits to mitigate the effects of charge sharing on SPC and spectroscopic x-ray image quality. © 2018 American Association of Physicists in Medicine.
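The double-counting mechanism behind this count inflation can be illustrated with a toy one-dimensional Monte Carlo. This is a sketch under strong simplifying assumptions (unit charge deposition, a Gaussian charge cloud, sharing with one neighbour only); the parameter values are illustrative and not fitted to CdTe:

```python
import math
import random

def simulate_counts(n_photons=10000, pitch=100.0, cloud_sigma=25.0,
                    threshold_frac=0.3, seed=1):
    """Toy 1-D Monte Carlo of charge-sharing count inflation.

    Each photon deposits a unit charge cloud (Gaussian, width cloud_sigma,
    all lengths in micrometres) at a uniformly random position inside a
    pixel of the given pitch. The charge fraction spilling across the
    nearest pixel boundary is compared against the counting threshold;
    events exceeding the threshold in both pixels are counted twice,
    inflating the apparent count rate. Returns apparent counts per photon.
    """
    rng = random.Random(seed)
    counts = 0
    for _ in range(n_photons):
        x = rng.uniform(0.0, pitch)            # interaction position
        d = min(x, pitch - x)                  # distance to nearest boundary
        # Gaussian tail beyond the boundary = charge shared with neighbour
        shared = 0.5 * math.erfc(d / (math.sqrt(2.0) * cloud_sigma))
        kept = 1.0 - shared
        counts += (kept >= threshold_frac) + (shared >= threshold_frac)
    return counts / n_photons
```

With a 25 μm charge cloud and a 30% threshold, a 100 μm pitch yields roughly 25% apparent inflation while a 500 μm pitch yields only a few percent, reproducing the qualitative pixel-size trend reported above.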
A review of multivariate methods in brain imaging data fusion
NASA Astrophysics Data System (ADS)
Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.
2010-03-01
On joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and been applied to achieve different purposes based on their respective assumptions. In this paper, we provide a comprehensive review on optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA on blind source separation (sCCA) and partial least squares (PLS); 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, can achieve higher decomposition accuracy as well as the correct automatic source link. Applications of the proposed model to real multitask fMRI data are compared to joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high-dynamic-range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. This structure is then used in a parallel leaf-to-root approach to efficiently compute the final max-tree and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance on both simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and speed-up continues to grow with thread count up to 64 threads.
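Since the max-tree itself is the central data structure here, a minimal sequential construction may help readers unfamiliar with it. This is the classic union-find formulation (pixels visited in decreasing grey order), not the paper's parallel flood-and-merge algorithm, and it omits canonicalization:

```python
def max_tree(image, width):
    """Build a max-tree (component tree of upper level sets) for a flat
    list of grey values representing a `width`-wide 2-D image with
    4-connectivity. Returns a parent array: parent[p] is the max-tree
    parent of pixel p, and the root (a global minimum) points to itself."""
    n = len(image)
    parent = [-1] * n
    zpar = [-1] * n                      # union-find forest; -1 = unseen

    def find(p):
        while zpar[p] != p:
            zpar[p] = zpar[zpar[p]]      # path halving
            p = zpar[p]
        return p

    # visit pixels from the highest grey value down to the lowest
    for p in sorted(range(n), key=lambda q: -image[q]):
        parent[p] = zpar[p] = p
        for q in (p - 1 if p % width else -1,
                  p + 1 if (p + 1) % width else -1,
                  p - width, p + width):
            if 0 <= q < n and zpar[q] != -1:   # neighbour already visited
                r = find(q)
                if r != p:
                    parent[r] = p              # attach brighter component
                    zpar[r] = p
    return parent
```

On the one-row image [1, 3, 2] this yields parent array [0, 2, 0]: the brightest pixel (value 3) hangs below the value-2 component, which hangs below the root at the global minimum.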
Sato, Sachiko; Rancourt, Ann; Sato, Yukiko; Satoh, Masahiko S.
2016-01-01
Mammalian cell culture has been used in many biological studies on the assumption that a cell line comprises putatively homogeneous clonal cells, thereby sharing similar phenotypic features. This fundamental assumption has not yet been fully tested; therefore, we developed a method for the chronological analysis of individual HeLa cells. The analysis was performed by live cell imaging, tracking of every single cell recorded on imaging videos, and determining the fates of individual cells. We found that cell fate varied significantly, indicating that, in contrast to the assumption, the HeLa cell line is composed of highly heterogeneous cells. Furthermore, our results reveal that only a limited number of cells are immortal and renew themselves, giving rise to the remaining cells. These cells have reduced reproductive ability, creating a functionally heterogeneous cell population. Hence, the HeLa cell line is maintained by the limited number of immortal cells, which could be putative cancer stem cells. PMID:27003384
Einstein, Andrew J.; Berman, Daniel S.; Min, James K.; Hendel, Robert C.; Gerber, Thomas C.; Carr, J. Jeffrey; Cerqueira, Manuel D.; Cullom, S. James; DeKemp, Robert; Dickert, Neal; Dorbala, Sharmila; Garcia, Ernest V.; Gibbons, Raymond J.; Halliburton, Sandra S.; Hausleiter, Jörg; Heller, Gary V.; Jerome, Scott; Lesser, John R.; Fazel, Reza; Raff, Gilbert L.; Tilkemeier, Peter; Williams, Kim A.; Shaw, Leslee J.
2014-01-01
Objective To identify key components of a radiation accountability framework fostering patient-centered imaging and shared decision-making in cardiac imaging. Background An NIH-NHLBI/NCI-sponsored symposium was held in November 2012 to address these issues. Methods Symposium participants, working in three tracks, identified key components of a framework to target critical radiation safety issues for the patient, the laboratory, and the larger population of patients with known or suspected cardiovascular disease. Results Use of ionizing radiation during an imaging procedure should be disclosed to all patients by the ordering provider at the time of ordering, and reinforced by the performing provider team. An imaging protocol with effective dose ≤3 mSv is considered very low risk, not warranting extensive discussion or written consent. However, a protocol effective dose ≥20 mSv was proposed as a level requiring particular attention in terms of shared decision-making and either formal discussion or written informed consent. Laboratory reporting of radiation dosimetry is a critical component of creating a quality laboratory fostering a patient-centered environment with transparent procedural methodology. Efforts should be directed to avoiding testing involving radiation in patients with inappropriate indications. Standardized reporting and diagnostic reference levels for computed tomography and nuclear cardiology are important for the goal of public reporting of laboratory radiation dose levels in conjunction with diagnostic performance. Conclusions The development of cardiac imaging technologies revolutionized cardiology practice by allowing routine, noninvasive assessment of myocardial perfusion and anatomy. It is now incumbent upon the imaging community to create an accountability framework to safely drive appropriate imaging utilization. PMID:24530677
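The proposed disclosure tiers can be written down directly as a how-to. The two cut points (3 and 20 mSv) are those quoted in the summary above; the wording given to the intermediate band is our illustrative assumption:

```python
def consent_level(effective_dose_msv):
    """Map a protocol's effective dose (mSv) to the disclosure tier
    proposed in the symposium summary. Cut points (3 and 20 mSv) are
    from the text; the intermediate tier's label is an assumption."""
    if effective_dose_msv <= 3:
        return "very low risk: disclosure, no extensive discussion or written consent"
    if effective_dose_msv >= 20:
        return "shared decision-making: formal discussion or written informed consent"
    return "routine disclosure by ordering and performing providers"
```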
Artificial neural network-aided image analysis system for cell counting.
Sjöström, P J; Frydel, B R; Wahlberg, L U
1999-05-01
In histological preparations containing debris and synthetic materials, it is difficult to automate cell counting using standard image analysis tools, i.e., systems that rely on boundary contours, histogram thresholding, etc. In an attempt to mimic manual cell recognition, an automated cell counter was constructed using a combination of artificial intelligence and standard image analysis methods. Artificial neural network (ANN) methods were applied on digitized microscopy fields without pre-ANN feature extraction. A three-layer feed-forward network with extensive weight sharing in the first hidden layer was employed and trained on 1,830 examples using the error back-propagation algorithm on a Power Macintosh 7300/180 desktop computer. The optimal number of hidden neurons was determined, and the trained system was validated by comparison with blinded human counts. System performance at 50x and 100x magnification was evaluated. The correlation index at 100x magnification neared person-to-person variability, while 50x magnification was not useful. The system was approximately six times faster than an experienced human. ANN-based automated cell counting in noisy histological preparations is feasible. Consistent histology and computer power are crucial for system performance. The system provides several benefits, such as speed of analysis and consistency, and frees up personnel for other tasks.
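The "weight sharing in the first hidden layer" mentioned above is what is now called a convolutional layer: every hidden unit applies the same kernel to a different window of the input, so the parameter count is independent of image size. A minimal one-dimensional sketch (illustrative placeholder weights, not the paper's trained network):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def shared_weight_forward(pixels, kernel, w_out, b_hidden=0.0, b_out=0.0):
    """Forward pass of a small feed-forward net whose first hidden layer
    shares one weight kernel across all input windows (a 1-D convolution),
    followed by a single fully connected output neuron."""
    k = len(kernel)
    hidden = [sigmoid(sum(w * x for w, x in zip(kernel, pixels[i:i + k])) + b_hidden)
              for i in range(len(pixels) - k + 1)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
```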
Small PACS implementation using publicly available software
NASA Astrophysics Data System (ADS)
Passadore, Diego J.; Isoardi, Roberto A.; Gonzalez Nicolini, Federico J.; Ariza, P. P.; Novas, C. V.; Omati, S. A.
1998-07-01
Building cost-effective PACS solutions is a main concern in developing countries. Hardware and software components are generally much more expensive than in developed countries, and tighter financial constraints further contribute to a slow rate of PACS implementation. The extensive use of the Internet for sharing resources and information has brought a broad number of freely available software packages to an ever-increasing number of users. In the field of medical imaging, it is possible to find image format conversion packages, DICOM-compliant servers for all kinds of service classes, databases, web servers, image visualization, manipulation and analysis tools, etc. This paper describes a PACS implementation for review and storage built on freely available software. It currently integrates four diagnostic modalities (PET, CT, MR and NM), a Radiotherapy Treatment Planning workstation and several computers in a local area network, for image storage, database management and image review, processing and analysis. It also includes a web-based application that allows remote users to query the archive for studies from any workstation and to view the corresponding images and reports. We conclude that the advantage of using this approach is twofold: it allows a full understanding of all the issues involved in the implementation of a PACS, and it helps keep costs down while enabling the development of a functional system for storage, distribution and review that can prove helpful for radiologists and referring physicians.
A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks
Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping
2013-01-01
In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
Plis, Sergey M; Sarwate, Anand D; Wood, Dylan; Dieringer, Christopher; Landis, Drew; Reed, Cory; Panta, Sandeep R; Turner, Jessica A; Shoemaker, Jody M; Carter, Kim W; Thompson, Paul; Hutchison, Kent; Calhoun, Vince D
2016-01-01
The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements). There are also significant data transfer, organization and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and "closed" repositories by developing privacy-preserving versions of widely-used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system which we demonstrate on two multi-site data sets, without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to "pooled-data" solutions (i.e., as if the entire data were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user friendly privacy enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions.
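The "iterate across sites until the result converges to the pooled-data solution" idea can be sketched on the simplest possible statistic: a mean fitted by gradient descent, where each site shares only an aggregate (its local gradient) and never its raw data. This illustrates the principle only; COINSTAC itself adds privacy-preserving mechanisms and supports far richer models, and all names below are our own:

```python
def decentralized_mean(site_data, rounds=200, lr=0.1):
    """Estimate the pooled mean of data split across sites without
    centralizing it. Each round, every site computes the gradient of a
    squared-error objective on its own data; only these aggregates are
    combined to update the shared estimate, which converges to the value
    that would be obtained if all data were in one place."""
    theta = 0.0
    total_n = sum(len(d) for d in site_data)
    for _ in range(rounds):
        # each site computes a local gradient sum on its private data ...
        grads = [sum(theta - x for x in data) for data in site_data]
        # ... and only these site-level aggregates leave the site
        theta -= lr * sum(grads) / total_n
    return theta
```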
The Cancer Imaging Archive (TCIA) | Informatics Technology for Cancer Research (ITCR)
TCIA is NCI’s repository for publicly shared cancer imaging data. TCIA collections include radiology and pathology images, clinical and clinical trial data, image derived annotations and quantitative features and a growing collection of related ‘omics data both from clinical and pre-clinical studies.
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-05-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. 
New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
A low noise steganography method for medical images with QR encoding of patient information
NASA Astrophysics Data System (ADS)
Patiño-Vanegas, Alberto; Contreras-Ortiz, Sonia H.; Martinez-Santos, Juan C.
2017-03-01
This paper proposes an approach to facilitate the individualization of patients from their medical images without compromising the inherent confidentiality of medical data. The identification of a patient from a medical image is not often the goal of security methods applied to image records. Usually, any identification data is removed from shared records, and security features are applied to determine ownership. We propose a method for embedding a QR code containing information that can be used to individualize a patient, done so that the image to be shared does not differ significantly from the original image. The QR code is distributed in the image by changing several pixels according to a threshold value based on the average value of the adjacent pixels surrounding the point of interest. The results show that the code can be embedded and later fully recovered with minimal changes to the image: the UIQI index differs from that of the original by less than 0.1%.
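The embedding rule described above (change selected pixels relative to a threshold derived from the average of the surrounding pixels) can be sketched as follows. The 4-neighbour average, the fixed offset `delta`, and the choice of embedding positions are our assumptions for illustration, not the authors' exact scheme:

```python
def _local_avg(img, width, p):
    """Average of the 4-neighbours of pixel p in a flat, width-wide image."""
    n = len(img)
    neigh = [q for q in (p - width, p + width,
                         p - 1 if p % width else -1,
                         p + 1 if (p + 1) % width else -1)
             if 0 <= q < n]
    return sum(img[q] for q in neigh) / len(neigh)

def embed_bits(image, width, bits, positions, delta=2):
    """Embed one QR-code bit per position by pushing the pixel just above
    (bit 1) or just below (bit 0) the average of its 4-neighbours."""
    img = list(image)
    for bit, p in zip(bits, positions):
        avg = round(_local_avg(img, width, p))
        img[p] = min(255, avg + delta) if bit else max(0, avg - delta)
    return img

def extract_bits(image, width, positions):
    """Recover each bit by comparing the pixel to the same local average."""
    return [1 if image[p] > _local_avg(image, width, p) else 0
            for p in positions]
```

In this simplified form, embedding positions must not be 4-adjacent to one another, since modifying one pixel would otherwise shift the local average used to decode its neighbour.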
Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance
Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang
2015-01-01
We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249
Jiang, Xiaoqian; Sarwate, Anand D.; Ohno-Machado, Lucila
2013-01-01
Objective Effective data sharing is critical for comparative effectiveness research (CER), but there are significant concerns about inappropriate disclosure of patient data. These concerns have spurred the development of new technologies for privacy preserving data sharing and data mining. Our goal is to review existing and emerging techniques that may be appropriate for data sharing related to CER. Material and methods We adapted a systematic review methodology to comprehensively search the research literature. We searched 7 databases and applied three stages of filtering based on titles, abstracts, and full text to identify those works most relevant to CER. Results Based on agreement and using the arbitrage of a third party expert, we selected 97 articles for meta-analysis. Our findings are organized along major types of data sharing in CER applications (i.e., institution-to-institution, institution-hosted, and public release). We made recommendations based on specific scenarios. Limitation We limited the scope of our study to methods that demonstrated practical impact, eliminating many theoretical studies of privacy that have been surveyed elsewhere. We further limited our study to data sharing for data tables, rather than complex genomic, set-valued, time series, text, image, or network data. Conclusion State-of-the-art privacy preserving technologies can guide the development of practical tools that will scale up the CER studies of the future. However, many challenges remain in this fast moving field in terms of practical evaluations as well as applications to a wider range of data types. PMID:23774511
Secure public cloud platform for medical images sharing.
Pan, Wei; Coatrieux, Gouenou; Bouslimi, Dalel; Prigent, Nicolas
2015-01-01
Cloud computing promises medical imaging services offering large storage and computing capabilities for limited costs. In this data outsourcing framework, one of the greatest issues to deal with is data security. To do so, we propose to secure a public cloud platform devoted to medical image sharing by defining and deploying a security policy so as to control various security mechanisms. This policy stands on a risk assessment we conducted so as to identify security objectives with a special interest for digital content protection. These objectives are addressed by means of different security mechanisms like access and usage control policy, partial-encryption and watermarking.
NASA Astrophysics Data System (ADS)
Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.
2015-10-01
Clouds are the dominant source of variability in surface solar radiation and of uncertainty in its prediction. However, the increasing share of solar energy in the world-wide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a shortest-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud-type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Especially convective-type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer used as representative for the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear-sky situations, which cause low GHI variability that is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
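Forecast skill against persistence, the benchmark used above, has a standard definition that can be stated compactly; the function name and the data passed in the test are illustrative, and the window is assumed non-constant so the persistence RMSE is non-zero:

```python
import math

def forecast_skill(obs, model_fc, horizon):
    """Skill = 1 - RMSE(model) / RMSE(persistence). The persistence
    forecast for time t simply reuses the value observed `horizon` steps
    earlier; positive skill means the model beats persistence."""
    ts = list(range(horizon, len(obs)))
    rmse = lambda fc: math.sqrt(sum((obs[t] - fc(t)) ** 2 for t in ts) / len(ts))
    rmse_model = rmse(lambda t: model_fc[t])
    rmse_persistence = rmse(lambda t: obs[t - horizon])
    return 1.0 - rmse_model / rmse_persistence
```

A clear-sky ramp is trivial for persistence to track, which is why low-variability scenes leave little room for positive skill.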
Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen
2013-06-01
Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advance analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.
Internet Color Imaging
Defense Technical Information Center Compilation Part Notice ADPO1 1348 (approved for public release)
Lee, Hsien-Che (Imaging Science and Technology Laboratory, Eastman Kodak Company, Rochester, New York)
2000-07-01
The sharing and exchange of color images over the Internet pose very challenging problems to color science and technology. Emerging color standards…
76 FR 38404 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
...; Shared Instrumentation: Grant Program Ultrasound Imaging S10. Date: July 19, 2011. Time: 1 p.m. to 5 p.m... Trials for Imaging and Image-Guided Interventions; Exploratory Grants. Date: July 14, 2011. Time: 1 p.m...
Unobtrusive integration of data management with fMRI analysis.
Poliakov, Andrew V; Hertzenberg, Xenia; Moore, Eider B; Corina, David P; Ojemann, George A; Brinkley, James F
2007-01-01
This note describes a software utility, called X-batch, which addresses two pressing issues typically faced by functional magnetic resonance imaging (fMRI) neuroimaging laboratories: (1) analysis automation and (2) data management. The first issue is addressed by providing a simple batch-mode processing tool for the popular SPM software package (http://www.fil.ion.ucl.ac.uk/spm/; Wellcome Department of Imaging Neuroscience, London, UK). The second is addressed by transparently recording metadata describing all aspects of the batch job (e.g., subject demographics, analysis parameters, locations and names of created files, date and time of analysis, and so on). These metadata are recorded as instances of an extended version of the Protégé-based Experiment Lab Book ontology created by the Dartmouth fMRI Data Center. The resulting instantiated ontology provides a detailed record of all fMRI analyses performed, and as such can be part of larger systems for neuroimaging data management, sharing, and visualization. The X-batch system is in use in our own fMRI research, and is available for download at http://X-batch.sourceforge.net/.
Detector motion method to increase spatial resolution in photon-counting detectors
NASA Astrophysics Data System (ADS)
Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong
2017-03-01
Medical imaging requires high spatial resolution to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former's high spatial resolution, high efficiency and low noise. Spatial resolution in a photon-counting image is determined by the pixel size; therefore, the smaller the pixel size, the higher the spatial resolution that can be obtained. However, reducing the pixel size requires a detector redesign, and an expensive fine fabrication process is required to integrate a signal processing unit at the reduced pixel size. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large-pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at a maximum X-ray tube voltage of 80 kVp. A spatial resolution similar to that of a 55-μm-pixel image was achieved by applying the proposed method to a 110-μm-pixel detector, with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels severely suffer from charge sharing as pixel size is reduced.
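The core idea above, sampling the scene with a coarse detector at sub-pixel offsets and merging the readouts onto a finer grid, can be sketched in a few lines (a simplified one-dimensional illustration, not the authors' reconstruction algorithm):

```python
import numpy as np

def interleave_shifted(img_a, img_b):
    """Interleave two 1-D detector readouts taken half a pixel apart
    to synthesize a sampling grid with twice the pixel density.
    img_a: readout at the nominal position; img_b: readout shifted by pixel/2."""
    out = np.empty(img_a.size + img_b.size, dtype=img_a.dtype)
    out[0::2] = img_a
    out[1::2] = img_b
    return out

# A coarse detector sampling a fine "scene" at two half-pixel offsets:
fine = np.arange(8)              # scene on a fine grid
a = fine[0::2]                   # detector at nominal position
b = fine[1::2]                   # detector shifted by half a coarse pixel
print(interleave_shifted(a, b))  # recovers the fine grid: [0 1 2 3 4 5 6 7]
```

In practice each coarse pixel integrates over its full aperture rather than point-sampling, so a real implementation would also deconvolve the pixel aperture; the sketch only shows the grid-densification step.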
Onychomycosis diagnosis using fluorescence and infrared imaging systems
NASA Astrophysics Data System (ADS)
da Silva, Ana Paula; Fortunato, Thereza Cury; Stringasci, Mirian D.; Kurachi, Cristina; Bagnato, Vanderlei S.; Inada, Natalia M.
2015-06-01
Onychomycosis is a common disease of the nail plate, constituting approximately half of all cases of nail infection. Onychomycosis diagnosis is challenging because it is hard to distinguish from other diseases of the nail lamina such as psoriasis, lichen ruber or eczematous nails. The existing diagnostic methods consist of clinical and laboratory analysis, such as direct mycological examination and culture, PCR, and histopathology with PAS staining. However, they all share certain disadvantages in terms of sensitivity and specificity, time delay, or cost. This study aimed to evaluate the use of infrared and fluorescence imaging as new non-invasive diagnostic tools in patients with suspected onychomycosis, and to compare them with established techniques. For fluorescence analysis, a Clinical Evince (MM Optics®) device was used, which consists of an optical assembly with a UV LED light source (wavelength 400 nm ± 10 nm; maximum light intensity 40 mW/cm² ± 20%). For infrared analysis, a Fluke® camera, model Ti400, was used. Patients with onychomycosis and a control group were analyzed for comparison. The fluorescence images were processed using MATLAB® routines, and infrared images were analyzed using the SmartView® 3.6 analysis software provided by Fluke®. The results demonstrated that both infrared and fluorescence imaging could be complementary in diagnosing different types of onychomycosis lesions. The simplicity of operation, quick response and non-invasive assessment of the nail in real time are important factors to be considered for implementation.
Veeraraghavan, Rengasayee; Gourdie, Robert G
2016-11-07
The spatial association between proteins is crucial to understanding how they function in biological systems. Colocalization analysis of fluorescence microscopy images is widely used to assess this. However, colocalization analysis performed on two-dimensional images with diffraction-limited resolution merely indicates that the proteins are within 200-300 nm of each other in the xy-plane and within 500-700 nm of each other along the z-axis. Here we demonstrate a novel three-dimensional quantitative analysis applicable to single-molecule positional data: stochastic optical reconstruction microscopy-based relative localization analysis (STORM-RLA). This method offers significant advantages: 1) STORM imaging affords 20-nm resolution in the xy-plane and <50 nm along the z-axis; 2) STORM-RLA provides a quantitative assessment of the frequency and degree of overlap between clusters of colabeled proteins; and 3) STORM-RLA also calculates the precise distances between both overlapping and nonoverlapping clusters in three dimensions. Thus STORM-RLA represents a significant advance in the high-throughput quantitative assessment of the spatial organization of proteins. © 2016 Veeraraghavan and Gourdie. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
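One ingredient of STORM-RLA described above, computing precise 3-D distances between clusters of localizations, can be illustrated with centroid-to-centroid distances (a simplified sketch only; the actual method also quantifies the frequency and degree of overlap between reconstructed cluster volumes):

```python
import numpy as np

def centroid_distances(points, labels):
    """Pairwise 3-D Euclidean distances between cluster centroids.
    points: (N, 3) array of single-molecule localizations (e.g. in nm);
    labels: length-N cluster assignment for each localization."""
    ids = np.unique(labels)
    cents = np.array([points[labels == i].mean(axis=0) for i in ids])
    diff = cents[:, None, :] - cents[None, :, :]
    return ids, np.sqrt((diff ** 2).sum(axis=-1))

pts = np.array([[0., 0., 0.], [2., 0., 0.],    # cluster 0, centroid (1, 0, 0)
                [4., 0., 0.], [6., 0., 0.]])   # cluster 1, centroid (5, 0, 0)
ids, d = centroid_distances(pts, np.array([0, 0, 1, 1]))
print(d[0, 1])  # 4.0
```

Real STORM data would first require clustering the raw localization list (e.g. by density-based clustering); the labels here are assumed given.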
A system for programming experiments and for recording and analyzing data automatically
Herrick, Robert M.; Denelsbeck, John S.
1963-01-01
A system designed for use in complex operant conditioning experiments is described. Some of its key features are: (a) plugboards that permit the experimenter to change either from one program to another or from one analysis to another in less than a minute, (b) time-sharing of permanently-wired, electronic logic components, (c) recordings suitable for automatic analyses. Included are flow diagrams of the system and sample logic diagrams for programming experiments and for analyzing data. PMID:14055967
ERIC Educational Resources Information Center
Szekely, George
2011-01-01
As hands-on environmental observers, children use printing to save and share "treasures" they find. In this article, the author shares thoughts which are based on observing what children find valuable and worth saving, and the printmaking processes used to "lift" images from their finds.
The shared neural basis of music and language.
Yu, Mengxia; Xu, Miao; Li, Xueting; Chen, Zhencai; Song, Yiying; Liu, Jia
2017-08-15
Human musical ability is proposed to play a key phylogenetic role in the evolution of language, and the similarity of hierarchical structure in music and language has led to considerable speculation about their shared mechanisms. While behavioral and electrophysiological studies have revealed associations between musical and linguistic abilities, results from functional magnetic resonance imaging (fMRI) studies of their relations are contradictory, possibly because these studies usually treat music or language as single entities without breaking them down into their components. Here, we examined the relations between different components of music (i.e., melodic and rhythmic analysis) and language (i.e., semantic and phonological processing) using both behavioral tests and resting-state fMRI. Behaviorally, we found that individuals with music training experience were better at semantic processing, but not at phonological processing, than those without training. Further correlation analyses showed that semantic processing of language was related to melodic, but not rhythmic, analysis of music. Neurally, we found that performances in both semantic processing and melodic analysis were correlated with spontaneous brain activities in the bilateral precentral gyrus (PCG) and superior temporal plane at the regional level, and with the resting-state functional connectivity of the left PCG with the left supramarginal gyrus and left superior temporal gyrus at the network level. Together, our study revealed the shared spontaneous neural basis of music and language based on the behavioral link between melodic analysis and semantic processing, which possibly relied on a common mechanism of automatic auditory-motor integration. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-07-15
Massive datasets comprising high-resolution images, generated in neuro-imaging studies and in clinical imaging research, are increasingly challenging our ability to analyze, share, and filter such images in clinical and basic translational research. Pivot collection exploratory analysis gives each user the ability to fully interact with massive amounts of visual data, with the sorting capability, flexibility and speed to fluidly access, explore or analyze massive sets of high-resolution images and their associated meta information, such as neuro-imaging databases from the Allen Brain Atlas. It is used in clustering, filtering, data sharing and classifying of the visual data into various deep-zoom levels and meta-information categories to detect underlying hidden patterns within the data set. We deployed prototype Pivot collections using Linux CentOS running the Apache web server, and also tested the prototype collections on other operating systems such as Windows (the most common variants) and UNIX. The approach is demonstrated to yield very good results when compared with other approaches used for the generation, creation, and clustering of massive image collections, such as the coronal and horizontal sections of the mouse brain from the Allen Brain Atlas. Pivot visual analytics was used to analyze a prototype dataset of Dab2 co-expressed genes from the Allen Brain Atlas. The metadata along with high-resolution images were automatically extracted using the Allen Brain Atlas API and then used to identify hidden information based on the various categories and conditions applied, using options generated from the automated collection. A metadata category like chromosome, as well as data for individual cases like sex, age, and plane attributes of a particular gene, is used to filter and sort, and to determine whether other genes exist with characteristics similar to Dab2.
Online access to the mouse brain Pivot collection is available at http://edtech-dev.uthsc.edu/CTSI/teeDev1/unittest/PaPa/collection.html (user name: tviangte; password: demome). Our proposed algorithm has automated the creation of large image Pivot collections; this will enable investigators in clinical research projects to easily and quickly analyze image collections through a perspective that is useful for making critical decisions about the discovered image patterns.
Content-based histopathology image retrieval using CometCloud.
Qi, Xin; Wang, Daihou; Rodero, Ivan; Diaz-Montes, Javier; Gensure, Rebekah H; Xing, Fuyong; Zhong, Hua; Goodell, Lauri; Parashar, Manish; Foran, David J; Yang, Lin
2014-08-26
The development of digital imaging technology is creating extraordinary levels of accuracy that provide support for improved reliability in different aspects of the image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together these facts make querying and sharing non-trivial and render centralized solutions unfeasible. Moreover, in many cases this data is often distributed and must be shared across multiple institutions requiring decentralized solutions. In this context, a new generation of data/information driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI) which enable investigators to seamlessly and securely interact with information/data which is distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance. The CBIR algorithms that were developed can reliably retrieve the candidate image patches exhibiting intensity and morphological characteristics that are most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high performance computing (HPC) and cloud resources, we demonstrated that the method can be successfully executed in minutes on the Cloud compared to weeks using standard computers. 
In this paper, we present a set of newly developed CBIR algorithms and validate them using two different pathology applications, which are regularly evaluated in the practice of pathology. Comparative experimental results demonstrate excellent performance throughout the course of a set of systematic studies. Additionally, we present and evaluate a framework to enable the execution of these algorithms across distributed resources. We show how parallel searching of content-wise similar images in the dataset significantly reduces the overall computational time to ensure the practical utility of the proposed CBIR algorithms.
A flexible, open, decentralized system for digital pathology networks.
Schuler, Robert; Smith, David E; Kumaraguruparan, Gowri; Chervenak, Ann; Lewis, Anne D; Hyde, Dallas M; Kesselman, Carl
2012-01-01
High-resolution digital imaging is enabling digital archiving and sharing of digitized microscopy slides and new methods for digital pathology. Collaborative research centers, outsourced medical services, and multi-site organizations stand to benefit from sharing pathology data in a digital pathology network. Yet significant technological challenges remain due to the large size and volume of digitized whole slide images. While information systems do exist for managing local pathology laboratories, they tend to be oriented toward narrow clinical use cases or offer closed ecosystems around proprietary formats. Few solutions exist for networking digital pathology operations. Here we present a system architecture and implementation of a digital pathology network and share results from a production system that federates major research centers.
Bioimage informatics for experimental biology
Swedlow, Jason R.; Goldberg, Ilya G.; Eliceiri, Kevin W.
2012-01-01
Over the last twenty years there have been great advances in light microscopy, with the result that multi-dimensional imaging has driven a revolution in modern biology. New approaches to data acquisition are reported frequently, yet the significant data management and analysis challenges presented by these new, complex datasets remain largely unsolved. As in the well-developed field of genome bioinformatics, central repositories are and will be key resources, but there is a critical need for informatics tools in individual laboratories to help manage, share, visualize, and analyze image data. In this article we present the recent efforts by the bioimage informatics community to tackle these challenges and discuss our own vision for the future development of bioimage informatics solutions. PMID:19416072
Elders' Life Stories: Impact on the Next Generation of Health Professionals
2013-01-01
The purpose of this study was to pilot an enhanced version of the “Share your Life Story” life review writing workshop. The enhanced version included the addition of an intergenerational exchange, based on the content of seniors' writings, with students planning careers in the health sciences. The researcher employed a mixed methods design. Preliminary results using descriptive analysis revealed an increase in positive images of aging and a decrease in negative images of aging among the five student participants. Qualitative results revealed six themes that illuminate the hows and whys of the quantitative results as well as additional program benefits. Feedback from students and seniors helped to refine the intergenerational protocol for a larger scale study. PMID:24027579
Dual-Color Luciferase Complementation for Chemokine Receptor Signaling.
Luker, Kathryn E; Luker, Gary D
2016-01-01
Chemokine receptors may share common ligands, setting up potential competition for ligand binding, and association of activated receptors with downstream signaling molecules such as β-arrestin. Determining the "winner" of competition for shared effector molecules is essential for understanding integrated functions of chemokine receptor signaling in normal physiology, disease, and response to therapy. We describe a dual-color click beetle luciferase complementation assay for cell-based analysis of interactions of two different chemokine receptors, CXCR4 and ACKR3, with the intracellular scaffolding protein β-arrestin 2. This assay provides real-time quantification of receptor activation and signaling in response to chemokine CXCL12. More broadly, this general imaging strategy can be applied to quantify interactions of any set of two proteins that interact with a common binding partner. © 2016 Elsevier Inc. All rights reserved.
ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.
Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi
2017-08-01
With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could successfully stream transforms, strings, points, and images at 30 fps in both directions. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
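The OpenIGTLink wire format used by the bridge prepends a fixed 58-byte header to every message. A minimal sketch of packing that header per the protocol's version-1 layout (the 64-bit CRC is left at 0 here, so a strict receiver would reject the message, and the fractional part of the timestamp is dropped; both are simplifications for illustration):

```python
import struct
import time

def igtl_header(msg_type, device_name, body, version=1, crc=0):
    """Pack the 58-byte OpenIGTLink v1 header:
    uint16 version, char[12] type name, char[20] device name,
    uint64 timestamp, uint64 body size, uint64 CRC (all big-endian)."""
    # OpenIGTLink timestamps put seconds in the high 32 bits; the
    # 32-bit fractional part is omitted in this sketch.
    ts = int(time.time()) << 32
    return struct.pack(">H12s20sQQQ", version,
                       msg_type.encode().ljust(12, b"\x00"),
                       device_name.encode().ljust(20, b"\x00"),
                       ts, len(body), crc)

body = b"hello"
hdr = igtl_header("STRING", "ROS-IGTL-Bridge", body)
print(len(hdr))  # 58, the fixed header size the protocol requires
```

A real client would append the body after the header and send both over the TCP socket; the type and device-name strings shown are examples, not values mandated by the bridge.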
Distributed deep learning networks among institutions for medical imaging.
Chang, Ken; Balachandar, Niranjan; Lam, Carson; Yi, Darvin; Brown, James; Beers, Andrew; Rosen, Bruce; Rubin, Daniel L; Kalpathy-Cramer, Jayashree
2018-03-29
Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance that was comparable to that of centrally hosted patient data. We also found that there is an improvement in the performance of cyclical weight transfer heuristic with a high frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
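The cyclical weight transfer heuristic described above can be illustrated with a toy linear model (a hedged sketch: the study used deep networks on real image collections, and the data, model, learning rate and step counts here are invented for illustration). The key property is that only model weights, never the institutions' data, cross institutional boundaries:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Four "institutions", each holding its own private (X, y) split:
data = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    data.append((X, X @ true_w))

def local_steps(w, X, y, lr=0.05, steps=20):
    """A few gradient-descent steps on one institution's private data."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Cyclical weight transfer: the model visits each institution in turn.
w = np.zeros(2)
for cycle in range(5):
    for X, y in data:
        w = local_steps(w, X, y)

print(np.round(w, 3))  # approaches true_w without pooling any data
```

The paper's finding that a higher transfer frequency helps corresponds here to using fewer local steps per visit and more cycles.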
ESR paper on the proper use of mobile devices in radiology.
2018-04-01
Mobile devices (smartphones, tablets, etc.) have become key methods of communication, data access and data sharing for the population in the past decade. The technological capabilities of these devices have expanded very rapidly; for example, their in-built cameras have largely replaced conventional cameras. Their processing power is often sufficient to handle the large data sets of radiology studies and to manipulate images and studies directly on hand-held devices. Thus, they can be used to transmit and view radiology studies, often in locations remote from the source of the imaging data. They are not recommended for primary interpretation of radiology studies, but they facilitate sharing of studies for second opinions, viewing of studies and reports by clinicians at the bedside, etc. Other potential applications include remote participation in educational activities (e.g. webinars) and consultation of online educational content, e-books, journals and reference sources. Social-networking applications can be used for exchanging professional information and teaching. Users of mobile devices must be aware of the vulnerabilities and dangers of their use, in particular regarding the potential for inappropriate sharing of confidential patient information, and must take appropriate steps to protect confidential data. • Mobile devices have revolutionized communication in the past decade, and are now ubiquitous. • Mobile devices have sufficient processing power to manipulate and display large data sets of radiological images. • Mobile devices allow transmission & sharing of radiologic studies for purposes of second opinions, bedside review of images, teaching, etc. • Mobile devices are currently not recommended as tools for primary interpretation of radiologic studies. • The use of mobile devices for image and data transmission carries risks, especially regarding confidentiality, which must be considered.
X-Rays, Pregnancy and You ... the decision with your doctor. What Kind of X-Rays Can Affect the Unborn Child? During most ...
Information Sharing and Knowledge Sharing as Communicative Activities
ERIC Educational Resources Information Center
Savolainen, Reijo
2017-01-01
Introduction: This paper elaborates the picture of information sharing and knowledge sharing as forms of communicative activity. Method: A conceptual analysis was made to find out how researchers have approached information sharing and knowledge sharing from the perspectives of transmission and ritual. The findings are based on the analysis of one…
SCIFIO: an extensible framework to support scientific image formats.
Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W
2016-12-07
No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.
Image Re-Ranking Based on Topic Diversity.
Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng
2017-08-01
Social media sharing websites allow users to annotate images with free tags, which significantly contributes to the development of web image retrieval. Tag-based image search is an important method for finding images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with the consideration of promoting topic coverage performance. First, we construct a tag graph based on the similarity between each pair of tags. Then, a community detection method is conducted to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieved results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. In addition, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.
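The random-walk ranking step above can be sketched with a generic PageRank-style walk over a tag-similarity graph (this is the general mechanism only, not the paper's adaptive multi-information model; the similarity values are invented for illustration):

```python
import numpy as np

def random_walk_scores(sim, restart=0.15, iters=100):
    """Stationary scores of a random walk over a tag-similarity graph.
    sim: symmetric nonnegative similarity matrix; each row is normalized
    into transition probabilities, with a uniform restart term."""
    P = sim / sim.sum(axis=1, keepdims=True)  # row-stochastic transitions
    n = sim.shape[0]
    s = np.full(n, 1.0 / n)                   # start from the uniform vector
    for _ in range(iters):
        s = restart / n + (1 - restart) * s @ P
    return s

# Three tags; tag 0 is strongly similar to both others, so it ranks first.
sim = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.2],
                [1.0, 0.2, 0.0]])
scores = random_walk_scores(sim)
print(scores.argmax())  # 0
```

Communities (or images) are then ordered by these scores; an adaptive variant would reweight the transition matrix with per-community information rather than raw tag similarity.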
Application of visual cryptography for learning in optics and photonics
NASA Astrophysics Data System (ADS)
Mandal, Avikarsha; Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan
2016-09-01
In the age of data digitization, important applications of optics- and photonics-based sensors and technology lie in the fields of biometrics and image processing. Protecting user data in a safe and secure way is an essential task in this area. However, traditional cryptographic protocols rely heavily on computer-aided computation. Secure protocols that rely only on human interactions are usually simpler to understand, and in many scenarios the development of such protocols is also important for ease of implementation and deployment. Visual cryptography (VC) is an encryption technique for images (or text) in which decryption is done by the human visual system. In this technique, an image is encrypted into a number of pieces (known as shares). When the printed shares are physically superimposed, the image can be decrypted by human vision. Modern digital watermarking technologies can be combined with VC for image copyright protection, where the shares can be watermarks (small identifying marks) embedded in the image. Similarly, VC can be used to improve the security of biometric authentication. This paper presents the design and implementation of a practical laboratory experiment based on the concept of VC for a course in media engineering. Specifically, our contribution deals with the integration of VC into different schemes for applications like digital watermarking and biometric authentication in the field of optics and photonics. We describe the theoretical concepts and propose our infrastructure for the experiment. Finally, we evaluate the learning outcomes of the experiment as performed by the students.
Loos, G; Moreau, J; Miroir, J; Benhaïm, C; Biau, J; Caillé, C; Bellière, A; Lapeyre, M
2013-10-01
The various image-guided radiotherapy (IGRT) techniques raise the question of how to control patient positioning before each irradiation session and how to share tasks between radiation oncologists and radiotherapy technicians. We have put in place procedures and operating methods to partially delegate tasks to radiotherapy technicians and to secure the process in three situations: control by orthogonal kV imaging (kV-kV) of bony landmarks, control by kV-kV imaging of intraprostatic fiducial gold markers, and control by cone beam CT (CBCT) imaging for prostate cancer. Significant medical overtime is required to control these three IGRT techniques. Because of their competence in imaging, these daily controls can be delegated to radiotherapy technicians; however, to secure the process, initial training and regular evaluation are essential. The analysis of the comparison of the use of kV/kV on bone structures allowed us to achieve a partial delegation of control to radiotherapy technicians. Controlling the positioning of the prostate through the use and automatic registration of fiducial gold markers allows better tracking of the prostate and can easily be delegated to radiotherapy technicians. The analysis of the use of daily cone beam CT for patients treated with intensity-modulated irradiation is underway, and a comparison of practices between radiotherapy technicians and radiation oncologists is ongoing to determine whether a partial delegation of this control is possible. Copyright © 2013. Published by Elsevier SAS.
Wang, Po-Shan; Wu, Hsiu-Mei; Lin, Ching-Po; Soong, Bing-Wen
2011-07-01
We performed diffusion tensor imaging to determine if multiple system atrophy (MSA)-cerebellar (C) and MSA-Parkinsonism (P) show similar changes, as shown in pathological studies. Nineteen patients with MSA-C, 12 patients with MSA-P, 20 patients with Parkinson disease, and 20 healthy controls were evaluated with the use of voxel-based morphometry analysis of diffusion tensor imaging. There was an increase in apparent diffusion coefficient values in the middle cerebellar peduncles and cerebellum and a decrease in fractional anisotropy in the pyramidal tract, middle cerebellar peduncles, and white matter of the cerebellum in patients with MSA-C and MSA-P compared to the controls (P < 0.001). In addition, isotropic diffusion-weighted image values were reduced in the cerebellar cortex and deep cerebellar nuclei in patients with MSA-C and increased in the basal ganglia in patients with MSA-P. These results indicate that despite their disparate clinical manifestations, patients with MSA-C and MSA-P share similar diffusion tensor imaging features in the infratentorial region. Further, the combination of FA, ADC and iDWI images can be used to distinguish between MSA (either form) and Parkinson disease, which has potential therapeutic implications.
NASA Astrophysics Data System (ADS)
Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.
2010-12-01
Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archiveable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but do not yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.
Cheating prevention in visual cryptography.
Hu, Chih-Ming; Tzeng, Wen-Guey
2007-01-01
Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented in transparencies; each participant holds one transparency. Most previous research on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we studied the cheating problem in VC and extended VC. We considered the attacks of malicious adversaries who may deviate from the scheme in any way. We presented three cheating methods and applied them to attack existing VC and extended VC schemes. We improved one cheat-preventing scheme. We proposed a generic method that converts a visual cryptography scheme (VCS) to another VCS that has the property of cheating prevention. The overhead of the conversion is near optimal in both contrast degradation and pixel expansion.
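For background, the classical 2-out-of-2 construction that such schemes build on can be sketched as follows (an illustrative sketch only, not the authors' cheat-preventing conversion): each secret pixel expands into two subpixels, white pixels receive identical random patterns on both shares, and black pixels receive complementary patterns, so stacking blackens secret-black pixels completely while secret-white pixels stay half transparent.

```python
import random

# Subpixel patterns for 2-subpixel expansion: 1 = black, 0 = transparent.
PATTERNS = [(0, 1), (1, 0)]

def make_shares(image, rng=random):
    """Split a binary image (list of rows, 1 = black) into two shares.

    White pixels get the same random pattern on both shares; black
    pixels get complementary patterns, so stacking turns them fully black.
    """
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:
            p = rng.choice(PATTERNS)
            r1.extend(p)
            if pixel == 0:                              # white: identical patterns
                r2.extend(p)
            else:                                       # black: complementary pattern
                r2.extend(PATTERNS[1 - PATTERNS.index(p)])
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Simulate stacking transparencies: a subpixel is black if either share is."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]
```

Note that each pixel of a single share always contains exactly one black subpixel regardless of the secret, which is why one share alone reveals nothing.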
A Combined Laser-Communication and Imager for Microspacecraft (ACLAIM)
NASA Technical Reports Server (NTRS)
Hemmati, H.; Lesh, J.
1998-01-01
ACLAIM is a multi-function instrument consisting of a laser communication terminal and an imaging camera that share a common telescope. A single APS (Active Pixel Sensor)-based focal-plane array is used to perform both the acquisition and tracking functions (for laser communication) and science imaging.
Optimization of Single-Sided Charge-Sharing Strip Detectors
NASA Technical Reports Server (NTRS)
Hamel, L.A.; Benoit, M.; Donmez, B.; Macri, J. R.; McConnell, M. L.; Ryan, J. M.; Narita, T.
2006-01-01
Simulations of the charge sharing properties of single-sided CZT strip detectors with small anode pads are presented. The effects of initial event size, carrier repulsion, diffusion, drift, trapping and detrapping are considered. These simulations indicate that such a detector with a 150 μm pitch will provide good charge sharing between neighboring pads. This is supported by a comparison of simulations and measurements for a similar detector with a coarser pitch of 225 μm that could not provide sufficient sharing. The performance of such a detector used as a gamma-ray imager is discussed.
NASA Astrophysics Data System (ADS)
Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev
2016-03-01
Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
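Forecast skill in this literature is commonly measured against persistence, e.g. skill = 1 - RMSE_forecast / RMSE_persistence, so that positive skill means the imager-based forecast beats simply carrying the last observation forward. A minimal sketch under that assumption (the paper's exact metric definitions are not reproduced here):

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def persistence_forecast(ghi, horizon):
    """Persistence: forecast GHI at t + horizon equals the last observed value."""
    return ghi[:-horizon]

def forecast_skill(model_pred, ghi, horizon):
    """Skill vs. persistence: positive means the model beats persistence."""
    obs = ghi[horizon:]
    ref = persistence_forecast(ghi, horizon)
    return 1.0 - rmse(model_pred, obs) / rmse(ref, obs)
```

This also illustrates the paper's finding qualitatively: under highly variable (convective) conditions persistence has a large RMSE and is easy to beat, while under near-constant clear-sky GHI persistence is already nearly perfect and skill tends to be low or negative.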
Cloning and characterization of a Candida albicans maltase gene involved in sucrose utilization.
Geber, A; Williamson, P R; Rex, J H; Sweeney, E C; Bennett, J E
1992-01-01
In order to isolate the structural gene involved in sucrose utilization, we screened a sucrose-induced Candida albicans cDNA library for clones expressing alpha-glucosidase activity. The C. albicans maltase structural gene (CAMAL2) was isolated. No other clones expressing alpha-glucosidase activity were detected. A genomic CAMAL2 clone was obtained by screening a size-selected genomic library with the cDNA clone. DNA sequence analysis reveals that CAMAL2 encodes a 570-amino-acid protein which shares 50% identity with the maltase structural gene (MAL62) of Saccharomyces carlsbergensis. The substrate specificity of the recombinant protein purified from Escherichia coli identifies the enzyme as a maltase. Northern (RNA) analysis reveals that transcription of CAMAL2 is induced by maltose and sucrose and repressed by glucose. These results suggest that assimilation of sucrose in C. albicans relies on an inducible maltase enzyme. The family of genes controlling sucrose utilization in C. albicans shares similarities with the MAL gene family of Saccharomyces cerevisiae and provides a model system for studying gene regulation in this pathogenic yeast. PMID:1400249
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows (Invited)
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-12-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues’ expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable “software appliance” to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish “talkoot” (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a “science story” in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. 
New services and workflows of interest will be discoverable using tag search, and advertised using “service casts” and “interest casts” (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH’s Mining Workflow Composer and the open-source ActiveBPEL engine, and JPL’s SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicating ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the “sociological” problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.
Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles
2015-11-01
Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
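The core idea of thresholding a Laplacian-of-Gaussian (LoG) response at a chosen scale can be sketched as below. This is a bare illustration only, not the ATLAS implementation: ATLAS additionally auto-selects the scale and derives a locally adapted threshold from a user-specified probability of false alarm, neither of which is shown here.

```python
import math

def log_kernel(sigma, radius=None):
    """Sampled Laplacian-of-Gaussian kernel, shifted to zero sum."""
    r = radius or int(3 * sigma)
    s2 = sigma * sigma
    k = [[(x * x + y * y - 2 * s2) / (s2 * s2) * math.exp(-(x * x + y * y) / (2 * s2))
          for x in range(-r, r + 1)]
         for y in range(-r, r + 1)]
    mean = sum(map(sum, k)) / (2 * r + 1) ** 2
    return [[v - mean for v in row] for row in k]

def convolve(img, kern):
    """'Same'-size convolution with zero padding (kernel is symmetric)."""
    h, w = len(img), len(img[0])
    r = len(kern) // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += img[ii][jj] * kern[r + di][r + dj]
            out[i][j] = acc
    return out

def detect_spots(img, sigma, thresh):
    """Bright blobs give a negative LoG response; threshold its magnitude."""
    resp = convolve(img, log_kernel(sigma))
    return {(i, j) for i, row in enumerate(resp)
            for j, v in enumerate(row) if -v > thresh}
```

In practice one would use an optimized filter (e.g. a separable implementation) rather than this direct convolution; the sketch only shows why the LoG response peaks at spots whose size matches the scale sigma.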
Full-Body CT Scans - What You Need to Know
... new service for health-conscious people: "Whole-body CT screening." This typically involves scanning the body from ...
Progress in Evaluating Quantitative Optical Gas Imaging
Development of advanced fugitive emission detection and assessment technologies that facilitate cost effective leak and malfunction mitigation strategies is an ongoing goal shared by industry, regulators, and environmental groups. Optical gas imaging (OGI) represents an importan...
SSBD: a database of quantitative data of spatiotemporal dynamics of biological phenomena
Tohsato, Yukako; Ho, Kenneth H. L.; Kyoda, Koji; Onami, Shuichi
2016-01-01
Motivation: Rapid advances in live-cell imaging analysis and mathematical modeling have produced a large amount of quantitative data on spatiotemporal dynamics of biological objects ranging from molecules to organisms. There is now a crucial need to bring these large amounts of quantitative biological dynamics data together centrally in a coherent and systematic manner. This will facilitate the reuse of this data for further analysis. Results: We have developed the Systems Science of Biological Dynamics database (SSBD) to store and share quantitative biological dynamics data. SSBD currently provides 311 sets of quantitative data for single molecules, nuclei and whole organisms in a wide variety of model organisms from Escherichia coli to Mus musculus. The data are provided in Biological Dynamics Markup Language format and also through a REST API. In addition, SSBD provides 188 sets of time-lapse microscopy images from which the quantitative data were obtained and software tools for data visualization and analysis. Availability and Implementation: SSBD is accessible at http://ssbd.qbic.riken.jp. Contact: sonami@riken.jp PMID:27412095
SSBD: a database of quantitative data of spatiotemporal dynamics of biological phenomena.
Tohsato, Yukako; Ho, Kenneth H L; Kyoda, Koji; Onami, Shuichi
2016-11-15
Rapid advances in live-cell imaging analysis and mathematical modeling have produced a large amount of quantitative data on spatiotemporal dynamics of biological objects ranging from molecules to organisms. There is now a crucial need to bring these large amounts of quantitative biological dynamics data together centrally in a coherent and systematic manner. This will facilitate the reuse of this data for further analysis. We have developed the Systems Science of Biological Dynamics database (SSBD) to store and share quantitative biological dynamics data. SSBD currently provides 311 sets of quantitative data for single molecules, nuclei and whole organisms in a wide variety of model organisms from Escherichia coli to Mus musculus. The data are provided in Biological Dynamics Markup Language format and also through a REST API. In addition, SSBD provides 188 sets of time-lapse microscopy images from which the quantitative data were obtained and software tools for data visualization and analysis. SSBD is accessible at http://ssbd.qbic.riken.jp. Contact: sonami@riken.jp. © The Author 2016. Published by Oxford University Press.
Open source tools for standardized privacy protection of medical images
NASA Astrophysics Data System (ADS)
Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas
2011-03-01
In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios, since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHI should be completely removed from the images according to the respective privacy regulations, but some basic data is usually required for accurate image interpretation. Our objective is to utilize and enhance the standardized de-identification specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit), utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets the privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
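The replace-rather-than-remove approach can be sketched as follows. This toy operates on a plain dictionary of attribute names instead of real DICOM elements (the work described builds on DCMTK and the standard's de-identification profiles), and the salted-hash pseudonym is an illustrative choice for keeping a stable patient index, not necessarily the paper's mechanism:

```python
import hashlib

def pseudonym(value, salt="site-secret"):
    """Deterministic pseudonym: the same patient always maps to the same ID,
    and re-identification is possible only for whoever holds the salt."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

# Attributes whose values are replaced for secondary use, loosely modeled on
# DICOM confidentiality profiles (simplified; attribute names are illustrative).
PHI_TAGS = {
    "PatientName":      lambda v: "ANON",
    "PatientID":        lambda v: pseudonym(v),
    "PatientBirthDate": lambda v: v[:4] + "0101",   # keep birth year only
    "InstitutionName":  lambda v: "",
}

def deidentify(dataset):
    """Return a copy with PHI replaced by values still usable for
    imaging diagnosis and patient indexing; other attributes pass through."""
    out = dict(dataset)
    for tag, replace in PHI_TAGS.items():
        if tag in out:
            out[tag] = replace(out[tag])
    return out
```

The deterministic pseudonym is what makes the scheme re-identifiable in a controlled way, which matches the paper's de- and re-identification pairing.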
Expedition Two crew share dessert in Zvezda module
2001-06-10
ISS002-E-6534 (10 June 2001) --- Expedition Two crewmembers Yury V. Usachev (left), mission commander, James S. Voss, flight engineer, and Susan J. Helms, flight engineer, share a dessert in the Zvezda Service Module. Usachev represents Rosaviakosmos. The image was recorded with a digital still camera.
Quantitative Comparison of the Variability in Observed and Simulated Shortwave Reflectance
NASA Technical Reports Server (NTRS)
Roberts, Yolanda, L.; Pilewskie, P.; Kindel, B. C.; Feldman, D. R.; Collins, W. D.
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a climate observation system that has been designed to monitor the Earth's climate with unprecedented absolute radiometric accuracy and SI traceability. Climate Observation System Simulation Experiments (OSSEs) have been generated to simulate CLARREO hyperspectral shortwave imager measurements to help define the measurement characteristics needed for CLARREO to achieve its objectives. To evaluate how well the OSSE-simulated reflectance spectra reproduce the Earth's climate variability at the beginning of the 21st century, we compared the variability of the OSSE reflectance spectra to that of the reflectance spectra measured by the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY). Principal component analysis (PCA) is a multivariate decomposition technique used to represent and study the variability of hyperspectral radiation measurements. Using PCA, between 99.7% and 99.9% of the total variance of the OSSE and SCIAMACHY data sets can be explained by subspaces defined by six principal components (PCs). To quantify how much information is shared between the simulated and observed data sets, we spectrally decomposed the intersection of the two data set subspaces. The results from four cases in 2004 showed that the two data sets share eight (January and October) and seven (April and July) dimensions, which correspond to about 99.9% of the total SCIAMACHY variance for each month. The spectral nature of these shared spaces, understood by examining the transformed eigenvectors calculated from the subspace intersections, exhibits similar physical characteristics to the original PCs calculated from each data set, such as water vapor absorption, vegetation reflectance, and cloud reflectance.
Traumatic Brain Injury Diffusion Magnetic Resonance Imaging Research Roadmap Development Project
2011-10-01
A promising technology on the horizon is diffusion tensor imaging (DTI), a magnetic resonance imaging (MRI)-based ... in the brain. The potential for DTI to improve our understanding of TBI has not been fully explored, and challenges associated with non-existent ... processing tools, quality control standards, and a shared image repository. The recommendations will be disseminated and pilot tested. A DTI of TBI
Start Your Search Engines. Part 2: When Image is Everything, Here are Some Great Ways to Find One
ERIC Educational Resources Information Center
Adam, Anna; Mowers, Helen
2008-01-01
There is no doubt that Google is great for finding images. Simply head to its home page, click the "Images" link, enter criteria in the search box, and--voila! In this article, the authors share some of their other favorite search engines for finding images. To make sure the desired images are available for educational use, consider searching for…
Goodman, Anna; Johnson, Rob; Aldred, Rachel; Brage, Soren; Bhalla, Kavi; Woodcock, James
2018-01-01
Background Street imagery is a promising and growing big data source providing current and historical images in more than 100 countries. Studies have reported using this data to audit road infrastructure and other built environment features. Here we explore a novel application, using Google Street View (GSV) to predict travel patterns at the city level. Methods We sampled 34 cities in Great Britain. In each city, we accessed 2000 GSV images from 1000 random locations. We selected archived images from time periods overlapping with the 2011 Census and the 2011–2013 Active People Survey (APS). We manually annotated the images into seven categories of road users. We developed regression models with the counts of images of road users as predictors. The outcomes included Census-reported commute shares of four modes (combined walking plus public transport, cycling, motorcycle, and car), as well as APS-reported past-month participation in walking and cycling. Results We found high correlations between GSV counts of cyclists (‘GSV-cyclists’) and cycle commute mode share (r = 0.92)/past-month cycling (r = 0.90). Likewise, GSV-pedestrians was moderately correlated with past-month walking for transport (r = 0.46), GSV-motorcycles was moderately correlated with commute share of motorcycles (r = 0.44), and GSV-buses was highly correlated with commute share of walking plus public transport (r = 0.81). GSV-car was not correlated with car commute mode share (r = –0.12). However, in multivariable regression models, all outcomes were predicted well, except past-month walking. The prediction performance was measured using cross-validation analyses. GSV-buses and GSV-cyclists are the strongest predictors for most outcomes. Conclusions GSV images are a promising new big data source to predict urban mobility patterns. Predictive power was the greatest for those modes that varied the most (cycle and bus). 
With its ability to identify mode of travel and capture street activity often excluded in routinely carried out surveys, GSV has the potential to be complementary to new and traditional data. With half the world’s population covered by street imagery, and with up to 10 years historical data available in GSV, further testing across multiple settings is warranted both for cross-sectional and longitudinal assessments. PMID:29718953
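The correlations reported above are ordinary Pearson coefficients. For reference, a minimal implementation (exercised below with made-up illustrative counts, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near ±1 indicates a strong linear relationship (as for GSV-cyclists vs. cycle commute share, r = 0.92), while a value near 0 (as for GSV-car vs. car commute share) indicates little linear association.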
Ubiquitous picture-rich content representation
NASA Astrophysics Data System (ADS)
Wang, Wiley; Dean, Jennifer; Muzzolini, Russ
2010-02-01
The amount of digital images taken by the average consumer is consistently increasing. People enjoy the convenience of storing and sharing their pictures through online (digital) and offline (traditional) media. A set of pictures can be uploaded to: online photo services, web blogs and social network websites. Alternatively, these images can be used to generate: prints, cards, photo books or other photo products. Through uploading and sharing, images are easily transferred from one format to another. And often, a different set of associated content (text, tags) is created across formats. For example, on his web blog, a user may journal his experiences of his recent travel; on his social network website, his friends tag and comment on the pictures; in his online photo album, some pictures are titled and keyword-tagged. When the user wants to tell a complete story, perhaps in a photo book, he must collect, across all formats: the pictures, writings and comments, etc. and organize them in a book format. The user has to arrange the content of his trip in each format. The arrangement, the associations between the images, tags, keywords and text, cannot be shared with other formats. In this paper, we propose a system that allows the content to be easily created and shared across various digital media formats. We define a uniformed data association structure to connect: images, documents, comments, tags, keywords and other data. This content structure allows the user to switch representation formats without reediting. The framework under each format can emphasize (display or hide) content elements based on preference. For example, a slide show view will emphasize the display of pictures with limited text; a blog view will display highlighted images and journal text; and the photo book will try to fit in all images and text content. In this paper, we will discuss the strategy to associate pictures with text content, so that it can naturally tell a story. 
We will also show sample solutions for different formats, such as picture view, blog view and photo book view.
Red, purple and pink: the colors of diffusion on pinterest.
Bakhshi, Saeideh; Gilbert, Eric
2015-01-01
Many lab studies have shown that colors can evoke powerful emotions and impact human behavior. Might these phenomena drive how we act online? A key research challenge for image-sharing communities is uncovering the mechanisms by which content spreads through the community. In this paper, we investigate whether there is a link between color and diffusion. Drawing on a corpus of one million images crawled from Pinterest, we find that color significantly impacts the diffusion of images and the adoption of content on image-sharing communities such as Pinterest, even after partially controlling for network structure and activity. Specifically, red, purple and pink seem to promote diffusion, while green, blue, black and yellow suppress it. To our knowledge, our study is the first to investigate how colors relate to online user behavior. In addition to contributing to the research conversation surrounding diffusion, these findings suggest future work using sophisticated computer vision techniques. We conclude with a discussion of the theoretical, practical and design implications suggested by this work, e.g. the design of engaging image filters.
Display Sharing: An Alternative Paradigm
NASA Technical Reports Server (NTRS)
Brown, Michael A.
2010-01-01
The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety with other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.
Report on the ''ESO Python Boot Camp — Pilot Version''
NASA Astrophysics Data System (ADS)
Dias, B.; Milli, J.
2017-03-01
The Python programming language is becoming very popular within the astronomical community. Python is a high-level language with multiple applications including database management, handling FITS images and tables, statistical analysis, and more advanced topics. Python is a very powerful tool both for astronomical publications and for observatory operations. Since the best way to learn a new programming language is through practice, we therefore organised a two-day hands-on workshop to share expertise among ESO colleagues. We report here the outcome and feedback from this pilot event.
Systems of Geo Positioning of the Mobile Robot
NASA Astrophysics Data System (ADS)
Momot, M. V.; Proskokov, A. V.; Nesteruk, D. N.; Ganiyev, M.; Biktimirov, A. S.
2017-07-01
This article analyzes the capabilities of electronic instruments such as a gyroscope, an accelerometer and a magnetometer, used together with a video-based image identification system and a system of infrared indicators, for building a precise positioning system for a mobile robot. Results of testing and the operating algorithms are given. The possibilities of using these devices jointly and combining them into a single system are analyzed. Conclusions are drawn on extending the capabilities and eliminating the shortcomings of the resulting end-to-end robot positioning system.
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Integrating Robotic Observatories into Astronomy Labs
NASA Astrophysics Data System (ADS)
Ruch, Gerald T.
2015-01-01
The University of St. Thomas (UST) and a consortium of five local schools are using the UST Robotic Observatory, housing a 17-inch telescope, to develop labs and image processing tools that allow easy integration of observational labs into existing introductory astronomy curricula. Our lab design removes the burden of equipment ownership by sharing access to a common resource, and removes the burden of data processing by automating processing tasks that are not relevant to the learning objectives. Each laboratory exercise takes place over two lab periods. During period one, students design and submit observation requests via the lab website. Between periods, the telescope automatically acquires the data and our image processing pipeline produces data ready for student analysis. During period two, the students retrieve their data from the website and perform the analysis. The first lab, 'Weighing Jupiter,' was successfully implemented at UST and several of our partner schools. We are currently developing a second lab to measure the age of and distance to a globular cluster.
Superpixel edges for boundary detection
Moya, Mary M.; Koch, Mark W.
2016-07-12
Various embodiments presented herein relate to identifying one or more edges in a synthetic aperture radar (SAR) image comprising a plurality of superpixels. Superpixels sharing an edge (or boundary) can be identified and one or more properties of the shared superpixels can be compared to determine whether the superpixels form the same or two different features. Where the superpixels form the same feature the edge is identified as an internal edge. Where the superpixels form two different features, the edge is identified as an external edge. Based upon classification of the superpixels, the external edge can be further determined to form part of a roof, wall, etc. The superpixels can be formed from a speckle-reduced SAR image product formed from a registered stack of SAR images, which is further segmented into a plurality of superpixels. The edge identification process is applied to the SAR image comprising the superpixels and edges.
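The compare-and-classify step described above can be sketched as follows; the feature vectors, distance measure, and threshold are illustrative assumptions, since the abstract does not fix specific values:

```python
import numpy as np

def classify_edge(props_a, props_b, threshold=0.2):
    """Classify the edge shared by two adjacent superpixels.

    props_a, props_b: feature vectors (e.g. mean backscatter, texture)
    for the two superpixels. If the features are similar, the
    superpixels likely belong to the same feature and the edge is
    'internal'; otherwise it separates two features and is 'external'.
    """
    distance = np.linalg.norm(np.asarray(props_a) - np.asarray(props_b))
    return "internal" if distance < threshold else "external"

# Two superpixels with nearly identical mean intensity/texture:
print(classify_edge([0.51, 0.30], [0.53, 0.31]))  # internal
# A bright region next to a dark shadow region:
print(classify_edge([0.90, 0.10], [0.15, 0.60]))  # external
```

Subsequent classification of external edges (roof, wall, etc.) would then operate only on the edges this test labels "external".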
The interrelationship between orthorexia nervosa, perfectionism, body image and attachment style.
Barnes, Marta A; Caltabiano, Marie L
2017-03-01
We investigated whether perfectionism, body image, attachment style, and self-esteem are predictors of orthorexia nervosa. A cohort of 220 participants completed a self-administered, online questionnaire consisting of five measures: ORTO-15, the Multidimensional Perfectionism Scale (MPS), the Multidimensional Body-Self Relations Questionnaire-Appearance Scale (MBSRQ-AS), the Relationship Scales Questionnaire (RSQ), and Rosenberg's Self-Esteem Scale (RSES). Correlation analysis revealed that higher orthorexic tendencies significantly correlated with higher scores for perfectionism (self-oriented, other-oriented, and socially prescribed), appearance orientation, overweight preoccupation, self-classified weight, and fearful and dismissing attachment styles. Higher orthorexic tendencies also correlated with lower scores for body areas satisfaction and a secure attachment style. There was no significant correlation between orthorexia nervosa and self-esteem. Multiple linear regression analysis revealed that overweight preoccupation, appearance orientation, and the presence of an eating disorder history were significant predictors of orthorexia nervosa, with a history of an eating disorder being the strongest predictor. Orthorexia nervosa shares similarities with anorexia nervosa and bulimia nervosa with regard to perfectionism, body image attitudes, and attachment style. In addition, a history of an eating disorder strongly predicts orthorexia nervosa. These findings suggest that these disorders might be on the same spectrum of disordered eating.
Bystrykh, L V; Vonck, J; van Bruggen, E F; van Beeumen, J; Samyn, B; Govorukhina, N I; Arfman, N; Duine, J A; Dijkhuizen, L
1993-01-01
The quaternary protein structure of two methanol:N,N'-dimethyl-4-nitrosoaniline (NDMA) oxidoreductases purified from Amycolatopsis methanolica and Mycobacterium gastri MB19 was analyzed by electron microscopy and image processing. The enzymes are decameric proteins (displaying fivefold symmetry) with estimated molecular masses of 490 to 500 kDa based on their subunit molecular masses of 49 to 50 kDa. Both methanol:NDMA oxidoreductases possess a tightly but noncovalently bound NADP(H) cofactor at an NADPH-to-subunit molar ratio of 0.7. These cofactors are redox active toward alcohol and aldehyde substrates. Both enzymes contain significant amounts of Zn2+ and Mg2+ ions. The primary amino acid sequences of the A. methanolica and M. gastri MB19 methanol:NDMA oxidoreductases share a high degree of identity, as indicated by N-terminal sequence analysis (63% identity among the first 27 N-terminal amino acids), internal peptide sequence analysis, and overall amino acid composition. The amino acid sequence analysis also revealed significant similarity to a decameric methanol dehydrogenase of Bacillus methanolicus C1. PMID:8449887
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
The Enemy at Home: Images of Addiction in American Society.
ERIC Educational Resources Information Center
Statman, James M.
1993-01-01
Notes that much of American public, political leadership, and service providers share marked denial of antecedents, dynamics, and consequences of dysfunctional drug use. Examines dynamics of this denial, describes popular images of drug use and drug users in American culture, and considers roots of these images in the underlying value systems of…
User-Driven Planning for Digital-Image Delivery
ERIC Educational Resources Information Center
Pisciotta, Henry; Halm, Michael J.; Dooris, Michael J.
2006-01-01
This article draws on two projects funded by the Andrew W. Mellon Foundation concerning the ways colleges and universities can support the legitimate sharing of digital learning resources for scholarly use. The 2001-03 Visual Image User Study (VIUS) assessed the scholarly needs of digital image users-faculty, staff, and students. That study led to…
An object-based storage model for distributed remote sensing images
NASA Astrophysics Data System (ADS)
Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng
2006-10-01
It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path. This resolves the metadata bottleneck of traditional storage models and provides parallel data access, data sharing across platforms, intelligent storage devices, and secure data access. We apply object-based storage to the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In this storage model, remote sensing images are organized as remote sensing objects stored in object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of the traditional network storage model and the object-based storage model.
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is shuffled and divided into blocks of the same size, and the singular value decomposition is then applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of each block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of digital images and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
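The share-generation step can be sketched as below. The block size, the seeded shuffle, and the choice of which first-column element to compare (row 1 of U versus row 1 of V) are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def share_bits(subband, block=8, seed=42):
    """Generate a binary share from a low-frequency sub-band.

    The sub-band is tiled into blocks, the blocks are visited in a
    shuffled order, and for each block one element of the first column
    of U is compared with the corresponding element of V (from the
    SVD of the block) to produce a share bit.
    """
    rng = np.random.default_rng(seed)
    h, w = subband.shape
    blocks = [subband[i:i + block, j:j + block]
              for i in range(0, h - block + 1, block)
              for j in range(0, w - block + 1, block)]
    bits = []
    for idx in rng.permutation(len(blocks)):  # shuffled block order
        u, s, vt = np.linalg.svd(blocks[idx])
        v = vt.T
        bits.append(1 if abs(u[1, 0]) >= abs(v[1, 0]) else 0)
    return bits

# A random 32x32 "sub-band" yields 16 blocks, hence 16 share bits:
subband = np.random.default_rng(0).random((32, 32))
bits = share_bits(subband)
print(len(bits))  # 16
```

The comparison is stable under many pixel-level distortions of the block, which is what gives visual-cryptography-based schemes their robustness.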
Distributed processing method for arbitrary view generation in camera sensor network
NASA Astrophysics Data System (ADS)
Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki
2003-05-01
A camera sensor network is a network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested from a central node or by a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we distribute the processing tasks among nodes. In this method, each sensor node executes part of the interpolation algorithm to generate the interpolated image using only local communication between nodes. The processing task is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods are proposed for distributing the processing tasks and sharing image data locally: Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP). Comparison of the proposed methods with Centralized Processing (CP) shows that FIS-DP has the highest processing speed, followed by PIS-DP, with CP the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended for its better overall performance than CP and FIS-DP.
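The MSE-minimization idea behind the interpolation can be illustrated with a much-simplified sketch: plain linear blending between two camera views in place of the paper's adaptive ray-space filtering, with the blend weight chosen to minimize MSE against a reference:

```python
import numpy as np

def interpolate_view(left, right, alpha):
    """Naive intermediate-view synthesis by linear blending of two
    camera images; the actual ray-space method instead selects
    adaptive filter weights per pixel."""
    return (1 - alpha) * left + alpha * right

def best_alpha(left, right, target, candidates):
    """Pick the blending weight that minimizes MSE against a reference
    view, mimicking the MSE-minimization criterion described above."""
    errs = [np.mean((interpolate_view(left, right, a) - target) ** 2)
            for a in candidates]
    return candidates[int(np.argmin(errs))]

# A target view a quarter of the way from 'left' to 'right' is best
# matched by alpha = 0.25:
left, right = np.zeros((4, 4)), np.ones((4, 4))
print(best_alpha(left, right, np.full((4, 4), 0.25),
                 [0, 0.25, 0.5, 0.75, 1.0]))  # 0.25
```

In the distributed schemes, each node would evaluate its portion of this optimization locally, exchanging only the image regions (FIS-DP: all, PIS-DP: part) its neighbors need.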
Telehealth solutions to enable global collaboration in rheumatic heart disease screening.
Lopes, Eduardo Lv; Beaton, Andrea Z; Nascimento, Bruno R; Tompsett, Alison; Dos Santos, Julia Pa; Perlman, Lindsay; Diamantino, Adriana C; Oliveira, Kaciane Kb; Oliveira, Cassio M; Nunes, Maria do Carmo P; Bonisson, Leonardo; Ribeiro, Antônio Lp; Sable, Craig
2018-02-01
Background The global burden of rheumatic heart disease is nearly 33 million people. Telemedicine, using cloud-server technology, provides an ideal solution for sharing images performed by non-physicians with cardiologists who are experts in rheumatic heart disease. Objective We describe our experience in using telemedicine to support a large rheumatic heart disease outreach screening programme in the Brazilian state of Minas Gerais. Methods The Programa de Rastreamento da Valvopatia Reumática (PROVAR) is a prospective cross-sectional study aimed at gathering epidemiological data on the burden of rheumatic heart disease in Minas Gerais and testing a non-expert, telemedicine-supported model of outreach rheumatic heart disease screening. The primary goal is to enable expert support of remote rheumatic heart disease outreach through cloud-based sharing of echocardiographic images between Minas Gerais and Washington. Secondary goals include (a) developing and sharing online training modules for non-physicians in echocardiography performance and interpretation and (b) utilising a secure web-based system to share clinical and research data. Results PROVAR included 4615 studies that were performed by non-experts at 21 schools and shared via cloud-telemedicine technology. Latent rheumatic heart disease was found in 251 subjects (4.2% of subjects: 3.7% borderline and 0.5% definite disease). Of the studies, 50% were performed on fully functional echocardiography machines and transmitted via Digital Imaging and Communications in Medicine (DICOM), and 50% were performed on handheld echocardiography machines and transferred via a secure Dropbox connection. The average time between study performance date and interpretation was 10 days. There was 100% success in initial image transfer. Less than 1% of studies performed by non-experts could not be interpreted.
Discussion A sustainable, low-cost telehealth model, using task-shifting with non-medical personnel in low- and middle-income countries, can improve access to echocardiography for rheumatic heart disease.
Tighe, Boden; Dunn, Matthew; McKay, Fiona H; Piatkowski, Timothy
2017-07-21
There is good evidence to suggest that performance and image enhancing drug (PIED) use is increasing in Australia and that there is an increase in those using PIEDs who have never used another illicit substance. Peers have always been an important source of information in this group, though the rise of the Internet, and the increased use of Internet forums amongst substance consumers to share harm reduction information, means that PIED users may have access to a large array of views and opinions. The aim of this study was to explore the type of information that PIED users seek and share on these forums. An online search was conducted to identify online forums that discussed PIED use. Three discussion forums were included in this study: aussiegymjunkies.com, bodybuildingforums.com.au, and brotherhoodofpain.com. The primary source of data for this study was the 'threads' from the online forums. Threads were thematically analysed for overall content, leading to the identification of themes. One hundred thirty-four threads and 1716 individual posts from 450 unique avatars were included in this analysis. Two themes were identified: (1) personal experiences and advice and (2) referral to services and referral to the scientific literature. Internet forums are an accessible way for members of the PIED community to seek and share information to reduce the harms associated with PIED use. Forum members show concern for both their own and others' use and, where they lack information, will recommend seeking information from medical professionals. Anecdotal evidence is given high credence though the findings from the scientific literature are used to support opinions. 
The engagement of health professionals within forums could prove a useful strategy for engaging with this population to provide harm reduction interventions, particularly as forum members are clearly seeking further reliable information, and peers may act as a conduit between users and the health and medical profession.
Rosenkrantz, Andrew B; Duszak, Richard
2018-03-01
The purpose of this study was to explore associations between CT and MRI utilization and cost savings achieved by Medicare Shared Savings Program (MSSP)-participating accountable care organizations (ACOs). Summary data were obtained for all MSSP-participating ACOs (n = 214 in 2013; n = 333 in 2014). Multivariable regressions were performed to assess associations of CT and MRI utilization with ACOs' total savings and reaching minimum savings rates to share in Medicare savings. In 2014, 54.4% of ACOs achieved savings, meeting minimum rates to share in savings in 27.6%. Independent positive predictors of total savings included beneficiary risk scores (β = +20,265,720, P = .003) and MRI events (β = +19,964, P = .018) but not CT events (β = +2,084, P = .635). Independent positive predictors of meeting minimum savings rates included beneficiary risk scores (odds ratio = 2108, P = .001) and MRI events (odds ratio = 1.008, P = .002), but not CT events (odds ratio = 1.002, P = .289). Measures not independently associated with savings were total beneficiaries; beneficiaries' gender, age, race or ethnicity; and Medicare enrollment type (P > .05). For ACOs with 2013 and 2014 data, neither increases nor decreases in CT and MRI events between years were associated with 2014 total savings or meeting savings thresholds (P ≥ .466). Higher MRI utilization rates were independently associated with small but significant MSSP ACO savings. The value of MRI might relate to the favorable impact of appropriate advanced imaging utilization on downstream outcomes and other resource utilization. Because MSSP ACOs represent a highly select group of sophisticated organizations subject to rigorous quality and care coordination standards, further research will be necessary to determine if these associations are generalizable to other health care settings. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Shared liking and association valence for representational art but not abstract art.
Schepman, Astrid; Rodway, Paul; Pullen, Sarah J; Kirkham, Julie
2015-01-01
We examined the finding that aesthetic evaluations are more similar across observers for representational images than for abstract images. It has been proposed that a difference in convergence of observers' tastes is due to differing levels of shared semantic associations (Vessel & Rubin, 2010). In Experiment 1, student participants rated 20 representational and 20 abstract artworks. We found that their judgments were more similar for representational than abstract artworks. In Experiment 2, we replicated this finding, and also found that valence ratings given to associations and meanings provided in response to the artworks converged more across observers for representational than for abstract art. Our empirical work provides insight into processes that may underlie the observation that taste for representational art is shared across individual observers, while taste for abstract art is more idiosyncratic.
2011-01-01
Background Massive datasets comprising high-resolution images, generated in neuro-imaging studies and in clinical imaging research, increasingly challenge our ability to analyze, share, and filter such images in clinical and basic translational research. Pivot collection exploratory analysis lets each user interact fully with these massive amounts of visual data, providing the sorting flexibility and speed needed to fluidly access, explore, and analyze large sets of high-resolution images and their associated meta-information, such as neuro-imaging databases from the Allen Brain Atlas. It is used for clustering, filtering, data sharing, and classifying the visual data into various deep-zoom levels and metadata categories to detect hidden patterns within the data set. Method We deployed prototype Pivot collections on CentOS Linux running the Apache web server. We also tested the prototype Pivot collections on other operating systems, such as the most common variants of Windows and UNIX. The approach yields very good results when compared with other approaches used for the generation, creation, and clustering of massive image collections, such as the coronal and horizontal sections of the mouse brain from the Allen Brain Atlas. Results Pivot visual analytics was used to analyze a prototype dataset of Dab2 co-expressed genes from the Allen Brain Atlas. The metadata along with high-resolution images were automatically extracted using the Allen Brain Atlas API and then used to identify hidden information based on the various categories and conditions applied, using options generated from the automated collection.
A metadata category such as chromosome, as well as data for individual cases such as the sex, age, and plane attributes of a particular gene, is used to filter and sort, and to determine whether other genes exist with characteristics similar to Dab2. Online access to the mouse brain Pivot collection can be viewed using the link http://edtech-dev.uthsc.edu/CTSI/teeDev1/unittest/PaPa/collection.html (user name: tviangte; password: demome). Conclusions Our proposed algorithm has automated the creation of large image Pivot collections; this will enable investigators on clinical research projects to easily and quickly analyse image collections through a perspective that is useful for making critical decisions about the image patterns discovered. PMID:21884637
Integration of DICOM and openEHR standards
NASA Astrophysics Data System (ADS)
Wang, Ying; Yao, Zhihong; Liu, Lei
2011-03-01
The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval, and exchange of health data in electronic health records. Because integrating DICOM and openEHR benefits information sharing, we developed a method, based on the XML-based DICOM format, of creating a DICOM imaging archetype in openEHR to enable the integration of the two standards. Each DICOM file contains abundant imaging information, but because reading a DICOM file involves looking up the DICOM Data Dictionary, its readability is limited. openEHR innovatively adopts a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. One critical challenge posed to the development of openEHR, however, is information sharing, especially the sharing of imaging information; for example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the archetype form defined in openEHR. Because an archetype has a tree structure, after a DICOM file is mapped to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without loss of imaging information.
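A toy sketch of the mapping idea follows; the tag names are real DICOM attribute keywords, but the archetype id and node layout are invented for illustration and do not reproduce the paper's mapping:

```python
# Flat DICOM attributes as they might be read from a file
# (values here are made up for the example).
dicom_attrs = {
    "PatientName": "DOE^JOHN",
    "Modality": "CT",
    "StudyDate": "20110301",
    "Rows": 512,
    "Columns": 512,
}

def to_archetype(attrs):
    """Arrange flat DICOM attributes into a tree mirroring an openEHR
    archetype. The tree structure makes each field self-describing,
    which is the readability gain the paper describes; the archetype
    id below is hypothetical."""
    return {
        "openEHR-EHR-OBSERVATION.imaging.v1": {
            "subject": {"name": attrs["PatientName"]},
            "protocol": {"modality": attrs["Modality"],
                         "study_date": attrs["StudyDate"]},
            "data": {"image_matrix": {"rows": attrs["Rows"],
                                      "columns": attrs["Columns"]}},
        }
    }
```

Serializing such a tree to XML would then give a file that is both human-readable and structured in conformance with openEHR.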
Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng
2013-01-01
To address interoperability issues between different fundus image systems, we propose a web eye picture archiving and communication system (PACS) framework, in conformance with the Digital Imaging and Communications in Medicine (DICOM) and Health Level 7 (HL7) protocols, to share and communicate fundus images and reports over the internet. First, a telemedicine-based eye care workflow was established based on the Integrating the Healthcare Enterprise (IHE) Eye Care technical framework. Then, a three-tier, browser/server eye-PACS system was established in conformance with the Web Access to DICOM Persistent Objects (WADO) protocol. From any client system with a web browser, clinicians can log in to the eye-PACS to observe fundus images and reports. A structured report saved as PDF/HTML (its MIME type), with a reference link to the relevant fundus image expressed in WADO syntax, provides enough information for clinicians. Functions provided by the open-source Oviyam viewer can be used to query, zoom, move, measure, and view DICOM fundus images. Such a web eye-PACS, compliant with the WADO protocol, can be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
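The WADO syntax mentioned above addresses a single DICOM object through URL query parameters (DICOM PS3.18, WADO-URI). A minimal sketch of building such a reference link, with a placeholder server URL and made-up UIDs:

```python
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid,
             content_type="application/dicom"):
    """Build a WADO-URI request for one image. The base URL and the
    UIDs passed in below are placeholders, not real endpoints."""
    params = {
        "requestType": "WADO",      # required, always "WADO"
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,  # application/dicom or image/jpeg
    }
    return base + "?" + urlencode(params)

print(wado_uri("http://pacs.example.org/wado",
               "1.2.840.1", "1.2.840.1.1", "1.2.840.1.1.1"))
```

Embedding such a link in the structured report is what lets any browser-based client resolve the referenced fundus image from the eye-PACS.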
EDGE3: A web-based solution for management and analysis of Agilent two color microarray experiments
Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A
2009-01-01
Background The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE3 was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. Results EDGE3 has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE3 is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access to the raw data files and quality control files generated by the software used to extract the signals from an array image. 
Conclusion Here, we present EDGE3, an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE3 provides a means for managing RNA samples and arrays during the hybridization process. EDGE3 is freely available for download at http://edge.oncology.wisc.edu/. PMID:19732451
Near Real-time Scientific Data Analysis and Visualization with the ArcGIS Platform
NASA Astrophysics Data System (ADS)
Shrestha, S. R.; Viswambharan, V.; Doshi, A.
2017-12-01
Scientific multidimensional data are generated from a variety of sources and platforms. These datasets are mostly produced by earth observation and/or modeling systems. Agencies like NASA, NOAA, USGS, and ESA produce large volumes of near real-time observation, forecast, and historical data that drive fundamental research and its applications in larger aspects of humanity, from basic decision making to disaster response. A common big data challenge for organizations working with multidimensional scientific data and imagery collections is the time and resources required to manage and process such large volumes and varieties of data. The challenge of adopting data-driven real-time visualization and analysis, as well as the need to share these large datasets, workflows, and information products with wider and more diverse communities, brings an opportunity to use the ArcGIS platform to handle such demand. In recent years, a significant effort has been put into expanding the capabilities of ArcGIS to support multidimensional scientific data across the platform. New capabilities in ArcGIS supporting scientific data management, processing, and analysis, as well as the creation of information products from large volumes of data using image server technology, are becoming widely used in earth science and other domains. We will discuss and share the challenges associated with big data in the geospatial science community and how we have addressed them in the ArcGIS platform. We will share a few use cases, such as NOAA High-Resolution Rapid Refresh (HRRR) data, that demonstrate how we access large collections of near real-time data (stored on-premises or in the cloud), disseminate them dynamically, process and analyze them on the fly, and serve them to a variety of geospatial applications.
We will also share how on-the-fly processing using raster functions can be extended to create persisted data and information products using raster analytics capabilities that exploit distributed computing in an enterprise environment.
Webb, Jennifer B; Vinoski, Erin R; Bonar, Adrienne S; Davies, Alexandria E; Etzel, Lena
2017-09-01
In step with the proliferation of Thinspiration and Fitspiration content disseminated in popular web-based media, the fat acceptance movement has garnered heightened visibility within mainstream culture via the burgeoning Fatosphere weblog community. The present study extended previous Fatosphere research by comparing the shared and distinct strategies used to represent and motivate a fat-accepting lifestyle among 400 images sourced from Fatspiration- and Health at Every Size®-themed hashtags on Instagram. Images were systematically analyzed for the socio-demographic and body size attributes of the individuals portrayed, alongside content reflecting dimensions of general fat acceptance, physical appearance pride, physical activity and health, fat shaming, and eating and weight loss-related themes. #fatspiration/#fatspo-tagged images more frequently promoted fat acceptance through fashion- and beauty-related activism; #healthateverysize/#haes posts more often featured physically active portrayals, holistic well-being, and weight stigma. Findings provide insight into the common and unique motivational factors and contradictory messages encountered in these fat-accepting social media communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging
Izquierdo, Alberto; Villacorta, Juan José; del Val Puente, Lara; Suárez, Luis
2016-01-01
This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of MEMS sensors was performed, and the beam pattern of a module, based on an 8 × 8 planar array and of several clusters of modules, was obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and how the algorithms are shared among them. Finally, a set of acoustic images obtained from sound reflected from a person are presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. PMID:27727174
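The frequency-domain beamforming step mentioned above can be sketched as delay-and-sum for a uniform linear array: each sensor's signal is phase-aligned toward a look direction and the results are summed. This is a generic textbook illustration, not the authors' implementation; the array geometry and numbers are invented, and a wideband system would apply the same alignment to every FFT bin.

```python
import cmath, math

def delay_and_sum_freq(num_sensors, d, freq, c, steer_deg, source_deg):
    """Frequency-domain delay-and-sum for a uniform linear array.

    Returns the normalized output magnitude when a plane wave arriving
    from source_deg hits the array and the beamformer is steered to
    steer_deg. Single frequency bin for simplicity.
    """
    out = 0j
    for n in range(num_sensors):
        # Phase of the arriving wave at sensor n (spacing d, sound speed c).
        arrival = cmath.exp(2j * math.pi * freq * n * d
                            * math.sin(math.radians(source_deg)) / c)
        # Steering weight: conjugate phase for the look direction.
        weight = cmath.exp(-2j * math.pi * freq * n * d
                           * math.sin(math.radians(steer_deg)) / c)
        out += weight * arrival
    return abs(out) / num_sensors

# Steered at the source: coherent sum (gain 1); steered away: attenuated.
on_target = delay_and_sum_freq(8, 0.02, 8000, 343.0, 30, 30)
off_target = delay_and_sum_freq(8, 0.02, 8000, 343.0, -30, 30)
print(round(on_target, 3), round(off_target, 3))
```

Scanning `steer_deg` over a grid of directions and plotting the output magnitude yields the beam pattern the abstract refers to.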
Leaf-FISH: Microscale Imaging of Bacterial Taxa on Phyllosphere
Peredo, Elena L.; Simmons, Sheri L.
2018-01-01
Molecular methods for microbial community characterization have uncovered environmental and plant-associated factors shaping phyllosphere communities. Variables undetectable using bulk methods can play an important role in shaping plant-microbe interactions. Microscale analysis of bacterial dynamics in the phyllosphere requires imaging techniques specially adapted to the high autofluorescence and 3-D structure of the leaf surface. We present an easily transferable method (Leaf-FISH) to generate high-resolution three-dimensional images of leaf surfaces that allows simultaneous visualization of multiple bacterial taxa in a structurally informed context, using taxon-specific fluorescently labeled oligonucleotide probes. Using a combination of leaf pretreatments coupled with spectral imaging confocal microscopy, we demonstrate the successful imaging of bacterial taxa at the genus level on cuticular and subcuticular leaf areas. Our results confirm that different bacterial species, including closely related isolates, colonize distinct microhabitats in the leaf. We demonstrate that highly related Methylobacterium species have distinct colonization patterns that could not be predicted by shared physiological traits, such as carbon source requirements or phytohormone production. High-resolution characterization of microbial colonization patterns is critical for an accurate understanding of microbe-microbe and microbe-plant interactions, and for the development of foliar bacteria as plant-protective agents. PMID:29375531
A data model and database for high-resolution pathology analytical image informatics.
Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel
2011-01-01
The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model that addresses these challenges and demonstrates its implementation in a relational database system. The data model, referred to as Pathology Analytic Imaging Standards (PAIS), and its database implementation are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). The work comprises: (1) development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information; and (2) development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. 
Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. Modeling and managing pathology image analysis results in a database provide immediate benefits to the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. 
Standardized, semantic annotated data representation and interfaces also make it possible to more efficiently share image data and analysis results.
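The kind of metadata query the PAIS abstract describes (e.g., comparing results from different analyses on the same slide) can be sketched with a toy relational schema. This is an invented miniature, not the actual PAIS model, using SQLite in place of the DB2 server the study ran on.

```python
import sqlite3

# Toy relational sketch (not the actual PAIS schema): images, analyses,
# and per-object markups, supporting a "compare two algorithms on the
# same slide" metadata query.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE image   (image_id INTEGER PRIMARY KEY, slide_name TEXT);
CREATE TABLE analysis(analysis_id INTEGER PRIMARY KEY, image_id INTEGER,
                      algorithm TEXT, params TEXT);
CREATE TABLE markup  (markup_id INTEGER PRIMARY KEY, analysis_id INTEGER,
                      object_type TEXT, area REAL);
""")
db.execute("INSERT INTO image VALUES (1, 'slide-001')")
db.executemany("INSERT INTO analysis VALUES (?, ?, ?, ?)",
               [(1, 1, 'segmenterA', 'p1'), (2, 1, 'segmenterB', 'p1')])
db.executemany("INSERT INTO markup VALUES (?, ?, ?, ?)",
               [(1, 1, 'nucleus', 52.0), (2, 1, 'nucleus', 47.0),
                (3, 2, 'nucleus', 60.0)])

# Compare nucleus counts per algorithm on the same slide.
rows = db.execute("""
    SELECT a.algorithm, COUNT(*) FROM markup m
    JOIN analysis a ON m.analysis_id = a.analysis_id
    WHERE m.object_type = 'nucleus'
    GROUP BY a.algorithm ORDER BY a.algorithm
""").fetchall()
print(rows)  # [('segmenterA', 2), ('segmenterB', 1)]
```

A real implementation would add spatial columns (or a spatial extension) for the region-containment and overlap queries the paper emphasizes.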
Berquist, Rachel M.; Gledhill, Kristen M.; Peterson, Matthew W.; Doan, Allyson H.; Baxter, Gregory T.; Yopak, Kara E.; Kang, Ning; Walker, H. J.; Hastings, Philip A.; Frank, Lawrence R.
2012-01-01
Museum fish collections possess a wealth of anatomical and morphological data that are essential for documenting and understanding biodiversity. Obtaining access to specimens for research, however, is not always practical and frequently conflicts with the need to maintain the physical integrity of specimens and the collection as a whole. Non-invasive three-dimensional (3D) digital imaging therefore serves a critical role in facilitating the digitization of these specimens for anatomical and morphological analysis as well as facilitating an efficient method for online storage and sharing of this imaging data. Here we describe the development of the Digital Fish Library (DFL, http://www.digitalfishlibrary.org), an online digital archive of high-resolution, high-contrast, magnetic resonance imaging (MRI) scans of the soft tissue anatomy of an array of fishes preserved in the Marine Vertebrate Collection of Scripps Institution of Oceanography. We have imaged and uploaded MRI data for over 300 marine and freshwater species, developed a data archival and retrieval system with a web-based image analysis and visualization tool, and integrated these into the public DFL website to disseminate data and associated metadata freely over the web. We show that MRI is a rapid and powerful method for accurately depicting the in-situ soft-tissue anatomy of preserved fishes in sufficient detail for large-scale comparative digital morphology. However these 3D volumetric data require a sophisticated computational and archival infrastructure in order to be broadly accessible to researchers and educators. PMID:22493695
Horror Image Recognition Based on Context-Aware Multi-Instance Learning.
Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng
2015-12-01
Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.
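The contextual step of CMIL, propagating per-region scores over a graph of adjacent regions via a random walk, can be illustrated generically. The sketch below is a standard random walk with restart on an invented three-region adjacency graph, not the authors' exact formulation; the scores and weights are made up.

```python
# Sketch of propagating per-region scores over a contextual graph of
# adjacent image regions, so each region's score reflects its neighbours.

def random_walk(adjacency, scores, restart=0.15, iters=100):
    """Random walk with restart over a region-adjacency graph."""
    n = len(scores)
    # Row-normalize adjacency into transition probabilities.
    trans = []
    for row in adjacency:
        s = sum(row)
        trans.append([a / s if s else 0.0 for a in row])
    p = scores[:]
    for _ in range(iters):
        # Move probability along edges, restarting to the initial scores.
        p = [restart * scores[i] +
             (1 - restart) * sum(p[j] * trans[j][i] for j in range(n))
             for i in range(n)]
    return p

# Three regions in a chain: 0-1 adjacent, 1-2 adjacent; region 0 starts
# with all of the initial (e.g., saliency-based) score.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
init = [1.0, 0.0, 0.0]
final = random_walk(adj, init)
print([round(v, 2) for v in final])
```

After convergence the score mass has spread to neighbours while the restart term keeps it anchored near the initially salient region, which is the contextual cue the algorithm exploits.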
NASA Astrophysics Data System (ADS)
Wang, Ximing; Documet, Jorge; Garrison, Kathleen A.; Winstein, Carolee J.; Liu, Brent
2012-02-01
Stroke is a major cause of adult disability. The Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (I-CARE) clinical trial aims to evaluate a therapy for arm rehabilitation after stroke. A primary outcome measure is correlative analysis between stroke lesion characteristics and standard measures of rehabilitation progress, from data collected at seven research facilities across the country. Sharing and communication of brain imaging and behavioral data is thus a challenge for collaboration. A solution is proposed as a web-based system with tools supporting imaging and informatics related data. In this system, users may upload anonymized brain images through a secure internet connection and the system will sort the imaging data for storage in a centralized database. Users may utilize an annotation tool to mark up images. In addition to imaging informatics, electronic data forms, for example, clinical data forms, are also integrated. Clinical information is processed and stored in the database to enable future data mining related development. Tele-consultation is facilitated through the development of a thin-client image viewing application. For convenience, the system supports access through desktop PCs, laptops, and iPads. Thus, clinicians may enter data directly into the system via iPad while working with participants in the study. Overall, this comprehensive imaging informatics system enables users to collect, organize and analyze stroke cases efficiently.
Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas
2010-06-28
Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-24
... such a demonstration by the importer will information, images, or samples be shared with the right... (see Sec. 133.21(b)(1) of this rule). This disclosure of information, which includes images... release to right holders information appearing on goods (and/or their retail packaging), and on images and...
Geometrical and optical calibration of a vehicle-mounted IR imager for land mine localization
NASA Astrophysics Data System (ADS)
Aitken, Victor C.; Russell, Kevin L.; McFee, John E.
2000-08-01
Many present-day vehicle-mounted landmine detection systems use IR imagers. Information furnished by these imaging systems usually consists of video and the location of targets within the video. In multisensor systems employing data fusion, there is a need to convert sensor information to a common coordinate system that all sensors share.
ERIC Educational Resources Information Center
Bean, Robert
2007-01-01
In this article, the author shares the idea behind his photography exhibition called "Verbatim." "Verbatim" is comprised of digital images made with a flatbed scanner. The prints are "contact images" that remember and forget the earlier technological processes of photography and typewriting. Photography, typing, and phonographic writing…
Copyright and Collaborative Spaces: Open Licensing and Wikis
ERIC Educational Resources Information Center
Botterbusch, Hope R.; Parker, Preston
2008-01-01
As recently as ten years ago, it may have seemed like science fiction to imagine collaborative spaces on the Internet. Today, collaborative websites have proliferated: (1) blogs; (2) social networking; (3) image sharing; (4) video sharing; (5) open educational resources; and (6) popularity websites. Another type is called a wiki, an online…
Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.
Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià
2010-01-01
The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata benefiting from existing deployed Grid infrastructures compliant with gLite, such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to semantically organize reports in a tree structure. In this paper, we first present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increasing interoperability. Using the metadata from DICOM-SR, and maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and it shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.
Reengineering Workflow for Curation of DICOM Datasets.
Bennett, William; Smith, Kirk; Jarosz, Quasar; Nolan, Tracy; Bosch, Walter
2018-06-15
Reusable, publicly available data is a pillar of open science and rapid advancement of cancer imaging research. Sharing data from completed research studies not only saves research dollars required to collect data, but also helps ensure that studies are both replicable and reproducible. The Cancer Imaging Archive (TCIA) is a global shared repository for imaging data related to cancer. Ensuring the consistency, scientific utility, and anonymity of data stored in TCIA is of utmost importance. As the rate of submission to TCIA has been increasing, both in volume and in complexity of the DICOM objects stored, the curation of collections has become a bottleneck in the acquisition of data. In order to increase the rate of curation of image sets, improve the quality of the curation, and better track the provenance of changes made to submitted DICOM image sets, a custom set of tools was developed, using novel methods for the analysis of DICOM datasets. These tools are written in Perl, use the open-source database PostgreSQL, make use of the Perl DICOM routines in the open-source package Posda, and incorporate DICOM diagnostic tools from other open-source packages, such as dicom3tools. These tools are referred to as the "Posda Tools." The Posda Tools are open source and available via git at https://github.com/UAMS-DBMI/PosdaTools . 
In this paper, we briefly describe the Posda Tools and discuss the novel methods they employ to facilitate rapid analysis of DICOM data, including the following: (1) a database schema which is more permissive, and differently normalized, than traditional DICOM databases; (2) automatic integrity checks performed on a bulk basis; (3) bulk revision of DICOM datasets, either through a web-based interface or via command-line Perl scripts; (4) tracking of all such edits in a revision tracker, with the ability to roll them back; (5) a UI for inspecting the results of such edits, to verify that they are what was intended; (6) identification of DICOM Studies, Series, and SOP instances using "nicknames" which are persistent and have well-defined scope, to make reported DICOM errors easier to manage; and (7) rapid identification of potential duplicate DICOM datasets by pixel data, which can be used, e.g., to identify submitted subjects that may relate to the same individual, without identifying the individual.
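Item (7) above, duplicate detection by pixel data, can be sketched by hashing the raw pixel bytes of each instance, so that files differing only in metadata still collide. This is a generic illustration of the idea, not the Posda Tools' Perl implementation; the in-memory "datasets" and UIDs below are invented stand-ins for parsed DICOM objects.

```python
import hashlib

# Flag potential duplicate DICOM instances by hashing raw pixel data.

def pixel_digest(pixel_bytes):
    return hashlib.sha256(pixel_bytes).hexdigest()

datasets = {
    "1.2.840.x.1": {"PatientID": "A01", "PixelData": b"\x00\x01\x02\x03"},
    "1.2.840.x.2": {"PatientID": "B07", "PixelData": b"\x00\x01\x02\x03"},
    "1.2.840.x.3": {"PatientID": "A01", "PixelData": b"\xff\xfe\xfd\xfc"},
}

# Group SOP instance UIDs by the digest of their pixel data.
by_digest = {}
for sop_uid, ds in datasets.items():
    by_digest.setdefault(pixel_digest(ds["PixelData"]), []).append(sop_uid)

duplicates = [uids for uids in by_digest.values() if len(uids) > 1]
print(duplicates)  # [['1.2.840.x.1', '1.2.840.x.2']]
```

Here two instances with different PatientIDs share identical pixel data, which is exactly the "same individual submitted twice" case the abstract describes flagging without identifying the individual.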
Recreational use in dispersed public lands measured using social media data and on-site counts.
Fisher, David M; Wood, Spencer A; White, Eric M; Blahna, Dale J; Lange, Sarah; Weinberg, Alex; Tomco, Michael; Lia, Emilia
2018-09-15
Outdoor recreation is one of many important benefits provided by public lands. Data on recreational use are critical for informing management of recreation resources; however, managers often lack actionable information on visitor use for large protected areas that lack controlled access points. The purpose of this study is to explore the potential for social media data (e.g., geotagged images shared on Flickr and trip reports shared on a hiking forum) to provide land managers with useful measures of recreational use to dispersed areas, and to provide lessons learned from comparing several more traditional counting methods. First, we measure daily and monthly visitation rates to individual trails within the Mount Baker-Snoqualmie National Forest (MBSNF) in western Washington. At 15 trailheads, we compare counts of hikers from infrared sensors, time-lapse cameras, and manual on-site counts to counts based on the number of shared geotagged images and trip reports from those locations. Second, we measure visitation rates to each National Forest System (NFS) unit across the US and compare annual measurements derived from the number of geotagged images to estimates from the US Forest Service National Visitor Use Monitoring Program. At both the NFS unit and the individual-trail scales, we found strong correlations between traditional measures of recreational use and measures based on user-generated content shared on the internet. For national forests in every region of the country, correlations between official Forest Service statistics and geotagged images ranged between 55% and 95%. For individual trails within the MBSNF, monthly visitor counts from on-site measurements were strongly correlated with counts from geotagged images (79%) and trip reports (91%). 
The convenient, cost-efficient and timely nature of collecting and analyzing user-generated data could allow land managers to monitor use over different seasons of the year and at sites and scales never previously monitored, contributing to a more comprehensive understanding of recreational use patterns and values. Copyright © 2018 Elsevier Ltd. All rights reserved.
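The correlations reported above amount to computing a correlation coefficient between a traditional count series and a social media count series for the same sites. A minimal sketch follows; all numbers are invented illustration data, not the study's measurements.

```python
import math

# Pearson correlation between monthly on-site visitor counts and counts
# of geotagged images for the same trail (illustrative data only).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

onsite  = [120, 340, 560, 980, 1500, 900]   # infrared-counter totals
geotags = [3, 9, 14, 22, 35, 20]            # shared images at the trailhead
r = pearson(onsite, geotags)
print(round(r, 2))
```

A strong r on data like this is what justifies using the cheap, always-on social media series as a proxy for expensive on-site monitoring.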
Porcupine: A visual pipeline tool for neuroimaging analysis
Snoek, Lukas; Knapen, Tomas
2018-01-01
The field of neuroimaging is rapidly adopting a more reproducible approach to data acquisition and analysis. Data structures and formats are being standardised and data analyses are getting more automated. However, as data analysis becomes more complicated, researchers often have to write longer analysis scripts, spanning different tools across multiple programming languages. This makes it more difficult to share or recreate code, reducing the reproducibility of the analysis. We present a tool, Porcupine, that constructs one’s analysis visually and automatically produces analysis code. The graphical representation improves understanding of the performed analysis, while retaining the flexibility of modifying the produced code manually to custom needs. Not only does Porcupine produce the analysis code, it also creates a shareable environment for running the code in the form of a Docker image. Together, this forms a reproducible way of constructing, visualising and sharing one’s analysis. Currently, Porcupine links to Nipype functionalities, which in turn accesses most standard neuroimaging analysis tools. Our goal is to release researchers from the constraints of specific implementation details, thereby freeing them to think about novel and creative ways to solve a given problem. Porcupine improves the overview researchers have of their processing pipelines, and facilitates both the development and communication of their work. This will reduce the threshold at which less expert users can generate reusable pipelines. With Porcupine, we bridge the gap between a conceptual and an implementational level of analysis and make it easier for researchers to create reproducible and shareable science. We provide a wide range of examples and documentation, as well as installer files for all platforms on our website: https://timvanmourik.github.io/Porcupine. Porcupine is free, open source, and released under the GNU General Public License v3.0. PMID:29746461
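The core of turning a visual pipeline into a script, as Porcupine does, is emitting each processing step after its upstream dependencies. The sketch below is a conceptual illustration with a tiny dependency-ordered code generator, not Porcupine's implementation; the step names and emitted lines are invented.

```python
# Represent a pipeline as a DAG of named steps, then emit analysis code
# in dependency order (depth-first over upstream dependencies).

def emit_script(steps, deps):
    """steps: {name: code line}; deps: {name: [upstream step names]}."""
    emitted, order = set(), []

    def visit(name):
        if name in emitted:
            return
        for up in deps.get(name, []):
            visit(up)               # emit dependencies first
        emitted.add(name)
        order.append(steps[name])

    for name in steps:
        visit(name)
    return "\n".join(order)

steps = {
    "smooth":  "img = smooth(img, fwhm=5)",
    "realign": "img = realign(raw)",
    "model":   "stats = fit_glm(img, design)",
}
deps = {"smooth": ["realign"], "model": ["smooth"]}
print(emit_script(steps, deps))
```

Whatever order the steps are declared in, the generated script always runs realignment before smoothing and smoothing before model fitting, mirroring the edges of the visual graph.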
A quantitative framework for flower phenotyping in cultivated carnation (Dianthus caryophyllus L.).
Chacón, Borja; Ballester, Roberto; Birlanga, Virginia; Rolland-Lagan, Anne-Gaëlle; Pérez-Pérez, José Manuel
2013-01-01
Most important breeding goals in ornamental crops are plant appearance and flower characteristics where selection is visually performed on direct offspring of crossings. We developed an image analysis toolbox for the acquisition of flower and petal images from cultivated carnation (Dianthus caryophyllus L.) that was validated by a detailed analysis of flower and petal size and shape in 78 commercial cultivars of D. caryophyllus, including 55 standard, 22 spray and 1 pot carnation cultivars. Correlation analyses allowed us to reduce the number of parameters accounting for the observed variation in flower and petal morphology. Convexity was used as a descriptor for the level of serration in flowers and petals. We used a landmark-based approach that allowed us to identify eight main principal components (PCs) accounting for most of the variance observed in petal shape. The effect and the strength of these PCs in standard and spray carnation cultivars are consistent with shared underlying mechanisms involved in the morphological diversification of petals in both subpopulations. Our results also indicate that neighbor-joining trees built with morphological data might infer certain phylogenetic relationships among carnation cultivars. Based on estimated broad-sense heritability values for some flower and petal features, different genetic determinants shall modulate the responses of flower and petal morphology to environmental cues in this species. We believe our image analysis toolbox could allow capturing flower variation in other species of high ornamental value.
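The convexity descriptor used above for serration is commonly computed as the ratio of a shape's area to the area of its convex hull: 1.0 for a smooth outline, smaller for serrated edges. The sketch below illustrates this definition on invented coordinates, not the authors' toolbox or real petal outlines.

```python
# Convexity = outline area / convex hull area, on polygon vertex lists.

def shoelace_area(pts):
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] -
                   pts[(i + 1) % n][0] * pts[i][1] for i in range(n))) / 2

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(pts))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convexity(outline):
    return shoelace_area(outline) / shoelace_area(convex_hull(outline))

square  = [(0, 0), (4, 0), (4, 4), (0, 4)]
notched = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]  # "serrated" top edge
print(convexity(square), round(convexity(notched), 2))  # 1.0 0.75
```

In practice the outline vertices would come from segmenting the flower or petal image; the descriptor itself is scale-invariant, which makes it convenient for comparing cultivars.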
Shah, Nilay D; Naessens, James M; Wood, Douglas L; Stroebel, Robert J; Litchy, William; Wagie, Amy; Fan, Jiaquan; Nesse, Robert
2011-11-01
Some health plans have experimented with increasing consumer cost sharing, on the theory that consumers will use less unnecessary health care if they are expected to bear some of the financial responsibility for it. However, it is unclear whether the resulting decrease in use is sustained beyond one or two years. In 2004 Mayo Clinic's self-funded health plan increased cost sharing for its employees and their dependents for specialty care visits (adding a $25 copayment to the high-premium option) and other services such as imaging, testing, and outpatient procedures (adding 10 or 20 percent coinsurance, depending on the option). The plan also removed all cost sharing for visits to primary care providers and for preventive services such as colorectal screening and mammography. The result was large decreases in the use of diagnostic testing and outpatient procedures that were sustained for four years, and an immediate decrease in the use of imaging that later rebounded (possibly to levels below the expected trend). Beneficiaries decreased visits to specialists but did not make greater use of primary care services. These results suggest that implementing relatively low levels of cost sharing can lead to a long-term decrease in utilization.
Estimating flood discharge using witness movies in post-flood hydrological surveys
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Hauet, Alexandre; Le Boursicaud, Raphaël; Pénard, Lionel; Bonnifait, Laurent; Dramais, Guillaume; Thollet, Fabien; Braud, Isabelle
2015-04-01
The estimation of streamflow rates based on post-flood surveys is of paramount importance for the investigation of extreme hydrological events. Major uncertainties usually arise from the absence of information on the flow velocities and from the limited spatio-temporal resolution of such surveys. Nowadays, after each flood occurring in populated areas, home movies taken from bridges, river banks, or even drones are shared by witnesses through Internet platforms like YouTube. Provided that some topography data and additional information are collected, image-based velocimetry techniques can be applied to some of these movie materials, in order to estimate flood discharges. As a contribution to recent post-flood surveys conducted in France, we developed and applied a method for estimating velocities and discharges based on the Large Scale Particle Image Velocimetry (LSPIV) technique. Since the seminal work of Fujita et al. (1998), LSPIV applications to river flows were reported by a number of authors and LSPIV can now be considered a mature technique. However, its application to non-professional movies taken by flood witnesses remains challenging and required some practical developments. The different steps to apply LSPIV analysis to a flood home movie are as follows: (i) select a video of interest; (ii) contact the author for agreement and extra information; (iii) conduct a field topography campaign to georeference Ground Control Points (GCPs), water level and cross-sectional profiles; (iv) preprocess the video before LSPIV analysis: correct lens distortion, align the images, etc.; (v) orthorectify the images to correct perspective effects and determine the physical size of pixels; (vi) proceed with the LSPIV analysis to compute the surface velocity field; and (vii) compute discharge according to a user-defined velocity coefficient. Two case studies in French mountainous rivers during extreme floods are presented. 
The movies were collected on YouTube and field topography surveys were achieved. Identifying fixed GCPs is more difficult in rural environments than in urban areas. Image processing was performed using free software only; in particular, Fudaa-LSPIV (Le Coz et al., 2014) was used for steps (v), (vi), and (vii). The results illustrate the typical issues and advantages of flood home movies taken by witnesses for improving post-flood discharge estimation. In spite of the non-ideal conditions related to such movies, the LSPIV technique was successfully applied. Corrections for lens distortion and limited camera movements (shake) are not difficult to achieve. Locating precisely the video viewpoint is often easy whereas precise timing may be not, especially when the author cannot be contacted or when the camera clock is wrong. Based on sensitivity analysis, the determination of the water level appears to be the main source of uncertainty in the results. Nevertheless, the information content of the results remains highly valuable for post-flood studies, in particular for improving the high-flow extrapolation of hydrometric rating curves. This kind of application opens interesting avenues for participative research in flood hydrology, as well as the study of other extreme geophysical events. For example, as part of the FloodScale ANR research project (2012-2015), specific communication actions have been focused on the determination of flood discharges within the Ardèche river catchment (France) using home movies shared by observers and volunteers. Safety instructions and a simplified field procedure were shared through local media and were made available in French and English on the project website. This way, simple flood observers or even some enthusiastic flood chasers can contribute to participative hydrological science in the same way the so-called storm chasers have significantly contributed to meteorological science since the Tornado Intercept Project (1972). 
Website: http://floodscale.irstea.fr/donnees-en/videos-amateurs-de-rivieres-en-crue/ References: Fujita, I., Muste, M., and Kruger, A. (1998). Large-scale particle image velocimetry for flow analysis in hydraulic engineering applications. Journal of Hydraulic Research, 36(3):397-414. Le Coz, J., Jodeau, M., Hauet, A., Marchand, B., Le Boursicaud, R. (2014). Image-based velocity and discharge measurements in field and laboratory river engineering studies using the free FUDAA-LSPIV software. Proceedings of the International Conference on Fluvial Hydraulics, RIVER FLOW 2014, 1961-1967.
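The discharge computation in step (vii) can be sketched with the standard velocity-area method; the function name and the cross-section values below are illustrative, and the depth-averaging coefficient of 0.85 is only a commonly quoted default, not a value prescribed by the study:

```python
# Illustrative velocity-area discharge computation (step vii):
# LSPIV surface velocities are scaled by a depth-averaging
# coefficient and integrated over the wetted cross-section.

def discharge(widths, depths, surface_velocities, alpha=0.85):
    """Sum alpha * v_surface * (width * depth) over verticals (m^3/s)."""
    q = 0.0
    for w, d, v in zip(widths, depths, surface_velocities):
        q += alpha * v * w * d
    return q

# Three verticals of a hypothetical cross-section:
# widths (m), depths (m), surface velocities (m/s)
q = discharge([2.0, 3.0, 2.0], [0.5, 1.2, 0.6], [1.0, 1.8, 1.1])
```

The sensitivity to water level noted in the abstract enters through the depths, so perturbing that input is a natural way to propagate the dominant uncertainty.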
Shared presence in physician-patient communication: A graphic representation.
Ventres, William B; Frankel, Richard M
2015-09-01
Shared presence is a state of being in which physicians and patients enter into a deep sense of trust, respect, and knowing that facilitates healing. Communication between physicians and patients (and, in fact, all providers and recipients of health care) is the medium through which shared presence occurs, regardless of the presenting problem, time available, location of care, or clinical history of the patient. Conceptualizing how communication leads to shared presence has been a challenging task, however. Pathways of this process have been routinely lumped together as the biopsychosocial model or patient-, person-, and relationship-centered care (all deceptive in their simplicity but, in fact, highly complex) or reduced to descriptive explications of one constituent element (e.g., empathy). In this article, we reconcile these pathways and elements by presenting a graphic image for clinicians and teachers in medical education. This conceptual image serves as a framework to synthesize the vast literature on physician-patient communication. We place shared presence, the fundamental characteristic of effective clinical communication, at the center of our figure. Around this focal point, we locate four elemental factors that either contribute to or result from shared presence: interpersonal skills, relational contexts, actions in clinical encounters, and healing outcomes. By visually presenting various known and emergent theories of physician-patient communication, outlining the flow of successful encounters between physicians and patients, and noting how such encounters can improve outcomes, physicians, other health care professionals, and medical educators can better grasp the complexity, richness, and potential for achieving shared presence with their patients. (c) 2015 APA, all rights reserved.
Lin, Yun-Bin; Lin, Yu-Pin; Deng, Dong-Po; Chen, Kuan-Wei
2008-02-19
In Taiwan, earthquakes have long been recognized as a major cause of landslides, which are then spread widely by the floods brought by subsequent typhoons. Distinguishing between landslide spatial patterns in different disturbance regimes is fundamental for disaster monitoring, management, and land-cover restoration. To circumscribe landslides, this study adopts the normalized difference vegetation index (NDVI), which can be determined by simply applying mathematical operations to near-infrared and visible-red spectral data immediately after remotely sensed data are acquired. In real-time disaster monitoring, the NDVI is more effective than using land-cover classifications generated from remotely sensed data, as land-cover classification tasks are extremely time consuming. Directional two-dimensional (2D) wavelet analysis has an advantage over traditional spectrum analysis in that it determines localized variations along a specific direction when identifying dominant modes of change, and where those modes are located in multi-temporal remotely sensed images. Open geospatial techniques comprise a series of solutions developed based on Open Geospatial Consortium specifications that can be applied to encode data for interoperability and to develop an open geospatial service for sharing data. This study presents a novel approach and framework that uses directional 2D wavelet analysis of real-time NDVI images to effectively identify landslide patterns and share the resulting patterns via open geospatial techniques. As a case study, this study analyzed NDVI images derived from SPOT HRV images before and after the Chi-Chi earthquake (7.3 on the Richter scale) that hit the Chenyulan basin in Taiwan, as well as images after two large typhoons (Xangsane and Toraji), to delineate the spatial patterns of landslides caused by major disturbances.
Disturbed spatial patterns of landslides that followed these events were successfully delineated using 2D wavelet analysis, and the results of landslide pattern recognition were distributed simultaneously to other agents using Geography Markup Language. Real-time information allows successive platforms (agents) to work with local geospatial data for disaster management. Furthermore, the proposed approach is suitable for detecting landslides in various regions at continental, regional, and local scales, using remotely sensed data at various resolutions derived from SPOT HRV, IKONOS, and QuickBird multispectral images.
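The NDVI computation the study relies on is simple enough to state directly. A minimal sketch (function name ours, reflectance values illustrative): dense vegetation reflects strongly in the near-infrared band, so freshly denuded landslide surfaces show a marked NDVI drop.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index:
    (NIR - Red) / (NIR + Red), in [-1, 1]."""
    return (nir - red) / (nir + red)

# Illustrative reflectances: vegetation is NIR-bright and red-dark;
# bare soil exposed by a landslide is much less so.
vegetated = ndvi(nir=0.50, red=0.08)   # high positive NDVI
bare_soil = ndvi(nir=0.30, red=0.20)   # much lower NDVI
```

Because the index needs only two bands and no classification step, it can be computed immediately after image acquisition, which is the real-time advantage the abstract emphasises.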
ERIC Educational Resources Information Center
Villano, Matt
2006-01-01
More and more colleges and universities today have discovered electronic record-keeping and record-sharing, made possible by document imaging technology. Across the country, schools such as Monmouth University (New Jersey), Washington State University, the University of Idaho, and Towson University (Maryland) are embracing document imaging. Yet…
JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.
Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun
2017-03-01
Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.
Sharing knowledge of Planetary Datasets through the Web-Based PRoGIS
NASA Astrophysics Data System (ADS)
Giordano, M. G.; Morley, J. M.; Muller, J. P. M.; Barnes, R. B.; Tao, Y. T.
2015-10-01
The large amount of raw and derived data available from various planetary surface missions (e.g., Mars and the Moon in our case) has been integrated with co-registered and geocoded orbital image data to provide rover traverses and camera site locations in universal global co-ordinates [1]. This then allows an integrated GIS to use these geocoded products for scientific applications: we aim to create a web interface, PRoGIS, with minimal controls focusing on usability and visualisation of the data, to allow planetary geologists to share annotated surface observations. These observations in a common context are shared between different tools and software (PRoGIS, PRo3D, 3D point cloud viewer). Our aim is to use only Open Source components that integrate Open Web Services for planetary data, to make available a universal platform with a WebGIS interface, as well as a 3D point cloud and a panorama viewer to explore derived data. On top of these tools we are building capabilities to make and share annotations amongst users. We use Python and Django for the server-side framework and OpenLayers 3 for the WebGIS client. For good performance when previewing 3D data (point clouds, pictures on the surface and panoramas) we employ Three.js, a WebGL JavaScript library. Additionally, user and group controls allow scientists to store and share their observations. PRoGIS not only displays data but also launches sophisticated 3D vision reprocessing (PRoVIP) and an immersive 3D analysis environment (PRo3D).
The concept of shared mental models in healthcare collaboration.
McComb, Sara; Simpson, Vicki
2014-07-01
To report an analysis of the concept of shared mental models in health care. Shared mental models have been described as facilitators of effective teamwork. The complexity and criticality of the current healthcare system requires shared mental models to enhance safe and effective patient/client care. Yet, the current concept definition in the healthcare literature is vague and, therefore, difficult to apply consistently in research and practice. Concept analysis. Literature for this concept analysis was retrieved from several databases, including CINAHL, PubMed and MEDLINE (EBSCO Interface), for the years 1997-2013. Walker and Avant's approach to concept analysis was employed and, following Paley's guidance, embedded in extant theory from the team literature. Although teamwork and collaboration are discussed frequently in healthcare literature, the concept of shared mental models in that context is not as commonly found but is increasing in appearance. Our concept analysis defines shared mental models as individually held knowledge structures that help team members function collaboratively in their environments and are comprised of the attributes of content, similarity, accuracy and dynamics. This theoretically grounded concept analysis provides a foundation for a middle-range descriptive theory of shared mental models in nursing and health care. Further research concerning the impact of shared mental models in the healthcare setting can result in development and refinement of shared mental models to support effective teamwork and collaboration. © 2013 John Wiley & Sons Ltd.
Global Systems Science and Hands-On Universe Course Materials for High School
NASA Astrophysics Data System (ADS)
Gould, A.
2011-09-01
The University of California Berkeley's Lawrence Hall of Science has a project called Global Systems Science (GSS). GSS produced a set of course materials for high school science education that includes reading materials, investigations, and software for analyzing satellite images of Earth focusing on Earth systems as well as societal issues that require interdisciplinary science for full understanding. The software has general application in analysis of any digital images for a variety of purposes. NSF and NASA funding have contributed to the development of GSS. The current NASA-funded project of GSS is Lifelines for High School Climate Change Education (LHSCCE), which aims to establish professional learning communities (PLCs) to share curriculum resources and best practices for teaching about climate change in grades 9-12. The project explores ideal ways for teachers to meet either in-person or using simple yet effective distance-communication techniques (tele-meetings), depending on local preferences. Skills promoted include: how to set up a website to share resources; initiating tele-meetings with any available mechanism (webinars, Skype, telecons, moodles, social network tools, etc.); and easy ways of documenting and archiving presentations made at meetings. Twenty teacher leaders are forming the PLCs in their regions or districts. This is a national effort in which teachers share ideas, strategies, and resources aimed at making science education relevant to societal issues, improve students' understanding of climate change issues, and contribute to possible solutions. Although the binding theme is climate change, the application is to a wide variety of courses: Earth science, environmental science, biology, physics, and chemistry. Moreover, the PLCs formed can last as long as the members find it useful and can deal with any topics of interest, even if they are only distantly related to climate change.
Manifold regularized multitask learning for semi-supervised multilabel image classification.
Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J
2013-02-01
It is a significant challenge to classify images with multiple labels using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features, because manifold regularization alone is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
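As a rough illustration of how manifold regularization enforces smoothness along the data manifold, the usual graph Laplacian penalty can be written in a few lines. This is a generic sketch of the standard penalty term, not the authors' MRMTL implementation; the weights and predictions are made up:

```python
# Graph Laplacian smoothness penalty:
#   f^T L f = (1/2) * sum_ij w_ij * (f_i - f_j)^2
# It is small when predictions f vary smoothly over neighbouring
# samples on the data graph, large when they oscillate.

def laplacian_penalty(weights, f):
    """weights[i][j]: similarity of samples i and j; f: predictions."""
    n = len(f)
    return 0.5 * sum(weights[i][j] * (f[i] - f[j]) ** 2
                     for i in range(n) for j in range(n))

w = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]                                   # a 3-node chain graph
smooth = laplacian_penalty(w, [1.0, 1.0, 1.0])    # constant: no penalty
rough = laplacian_penalty(w, [1.0, -1.0, 1.0])    # oscillating: penalized
```

Adding such a term to each task's loss biases all classifiers toward functions that respect the geometry of the unlabeled data, which is how the semi-supervised signal enters.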
Red, Purple and Pink: The Colors of Diffusion on Pinterest
Bakhshi, Saeideh; Gilbert, Eric
2015-01-01
Many lab studies have shown that colors can evoke powerful emotions and impact human behavior. Might these phenomena drive how we act online? A key research challenge for image-sharing communities is uncovering the mechanisms by which content spreads through the community. In this paper, we investigate whether there is a link between color and diffusion. Drawing on a corpus of one million images crawled from Pinterest, we find that color significantly impacts the diffusion of images and the adoption of content on image-sharing communities such as Pinterest, even after partially controlling for network structure and activity. Specifically, red, purple, and pink seem to promote diffusion, while green, blue, black, and yellow suppress it. To our knowledge, our study is the first to investigate how colors relate to online user behavior. In addition to contributing to the research conversation surrounding diffusion, these findings suggest future work using sophisticated computer vision techniques. We conclude with a discussion of the theoretical, practical, and design implications suggested by this work, e.g., the design of engaging image filters. PMID:25658423
Neural network fusion: a novel CT-MR aortic aneurysm image segmentation method
NASA Astrophysics Data System (ADS)
Wang, Duo; Zhang, Rui; Zhu, Jin; Teng, Zhongzhao; Huang, Yuan; Spiga, Filippo; Du, Michael Hong-Fei; Gillard, Jonathan H.; Lu, Qingsheng; Liò, Pietro
2018-03-01
Medical imaging examination of patients usually involves more than one imaging modality, such as Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) imaging. Multimodal imaging allows examiners to benefit from the advantages of each modality. For example, for abdominal aortic aneurysm, CT imaging shows calcium deposits in the aorta clearly, while MR imaging distinguishes thrombus and soft tissues better [1]. Analysing and segmenting both CT and MR images and combining the results will greatly help radiologists and doctors to treat the disease. In this work, we present methods for using deep neural network models to perform such multi-modal medical image segmentation. As CT and MR images of the abdominal area cannot be well registered due to non-affine deformations, a naive approach is to train CT and MR segmentation networks separately. However, such an approach is time-consuming and resource-inefficient. We propose a new approach that fuses the high-level parts of the CT and MR networks together, hypothesizing that neurons recognizing the high-level concepts of aortic aneurysm can be shared across multiple modalities. Such a network can be trained end-to-end with non-registered CT and MR images in a shorter training time. Moreover, network fusion allows a shared representation of the aorta in both CT and MR images to be learnt. Through experiments we discovered that for parts of the aorta showing similar aneurysm conditions, their representations in the neural network are separated by shorter distances. Such feature-level distances are helpful for registering CT and MR images.
Cascaded systems analysis of photon counting detectors
Xu, J.; Zbijewski, W.; Gang, G.; Stayman, J. W.; Taguchi, K.; Lundqvist, M.; Fredenberg, E.; Carrino, J. A.; Siewerdsen, J. H.
2014-01-01
Purpose: Photon counting detectors (PCDs) are an emerging technology with applications in spectral and low-dose radiographic and tomographic imaging. This paper develops an analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). Methods: A cascaded systems analysis model describing the propagation of quanta through the imaging chain was developed. The model was validated in comparison to the physical performance of a silicon-strip PCD implemented on an experimental imaging bench. The signal response, MTF, and NPS were measured and compared to theory as a function of exposure conditions (70 kVp, 1–7 mA), detector threshold, and readout mode (i.e., the option for coincidence detection). The model sheds new light on the dependence of spatial resolution, charge sharing, and additive noise effects on threshold selection and was used to investigate the factors governing PCD performance, including the fundamental advantages and limitations of PCDs in comparison to energy-integrating detectors (EIDs) in the linear regime for which pulse pileup can be ignored. Results: The detector exhibited highly linear mean signal response across the system operating range and agreed well with theoretical prediction, as did the system MTF and NPS. The DQE analyzed as a function of kilovolt (peak), exposure, detector threshold, and readout mode revealed important considerations for system optimization. 
The model also demonstrated the important implications of false counts from both additive electronic noise and charge sharing and highlighted the system design and operational parameters that most affect detector performance in the presence of such factors: for example, increasing the detector threshold from 0 to 100 (arbitrary units of pulse height threshold, roughly equivalent to 0.5 and 6 keV energy thresholds, respectively) increased the f50 (the spatial frequency at which the MTF falls to a value of 0.50) by ∼30%, with a corresponding improvement in DQE. The range in exposure and additive noise for which PCDs yield intrinsically higher DQE was quantified, showing performance advantages under conditions of very low dose, high additive noise, and high-fidelity rejection of coincident photons. Conclusions: The model for PCD signal and noise performance agreed with measurements of detector signal, MTF, and NPS and provided a useful basis for understanding complex dependencies in PCD imaging performance and the potential advantages (and disadvantages) in comparison to EIDs, as well as an important guide to task-based optimization in developing new PCD imaging systems. PMID:25281959
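One common textbook form of the frequency-dependent DQE combines the gain, MTF, and NPS that the abstract names. The sketch below uses that generic form with made-up numbers; conventions (units, normalisation, where the gain enters) vary between cascaded-systems formulations, so it should be read as illustrative only, not as the authors' exact model:

```python
# One common form of the cascaded-systems DQE (conventions vary):
#   DQE(f) = qbar * g^2 * MTF(f)^2 / NPS(f)
# with qbar the incident photon fluence and g the overall system gain.

def dqe(qbar, gain, mtf, nps):
    """Frequency-dependent DQE from sampled MTF and NPS curves."""
    return [qbar * gain ** 2 * m ** 2 / n for m, n in zip(mtf, nps)]

# Illustrative samples at three spatial frequencies (arbitrary units):
mtf = [1.0, 0.6, 0.3]          # MTF falls with frequency
nps = [4.0, 2.0, 1.5]          # measured noise-power spectrum
d = dqe(qbar=100.0, gain=0.2, mtf=mtf, nps=nps)
```

The shape of the result captures the trade-off discussed above: raising the threshold that sharpens the MTF and suppresses false counts lifts DQE, while excess noise in the NPS pulls it down.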
Providers' Access of Imaging Versus Only Reports: A System Log File Analysis.
Jung, Hye-Young; Gichoya, Judy Wawira; Vest, Joshua R
2017-02-01
An increasing number of technologies allow providers to access the results of imaging studies. This study examined differences in health care professionals' access of radiology images, compared with text-only reports, through a health information exchange system. The study sample included 157,256 historical sessions from a health information exchange system that enabled 1,670 physicians and non-physicians to access text-based reports and imaging over the period 2013 to 2014. The primary outcome was an indicator of access of an imaging study instead of access of a text-only report. Multilevel mixed-effects regression models were used to estimate the association between provider and session characteristics and access of images compared with text-only reports. Compared with primary care physicians, specialists had an 18% higher probability of accessing actual images instead of text-only reports (β = 0.18; P < .001). Compared with primary care practice settings, the probability of accessing images was 4% higher for specialty care practices (P < .05) and 8% lower for emergency departments (P < .05). Radiologists, orthopedists, and neurologists accounted for 79% of all sessions in which actual images were accessed. Orthopedists, radiologists, surgeons, and pulmonary disease specialists accessed imaging more often than text-based reports only. Differences in the need to access images rather than text-only reports, by provider type and care setting, should be considered to maximize the benefits of image sharing for patient care. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow.
Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong
2015-01-01
Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and it has therefore attracted considerable attention since its appearance. Improving the embedding capacity and eliminating the underflow and overflow situations, which are troublesome and difficult to deal with, have become hot topics for researchers. The existing scheme with the highest embedding capacity suffers from underflow and overflow problems; although these situations have been well handled by other methods, the embedding capacities of those methods are reduced to varying degrees. Motivated by these concerns, we propose a novel scheme in which differential coding, Huffman coding, and data conversion are used to compress the secret image before embedding it, to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image and avoid the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity and eliminates the underflow and overflow situations at the same time.
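The differential-coding step mentioned above can be illustrated with a minimal lossless round trip. This is a generic sketch (function names ours), not the authors' full pipeline, which additionally applies Huffman coding and data conversion after the differencing:

```python
# Differential coding of a pixel row: storing differences modulo 256
# concentrates values near 0 and 255, which a subsequent entropy
# coder (e.g. Huffman) can compress well. The round trip is lossless.

def diff_encode(pixels):
    """First pixel verbatim, then successive differences mod 256."""
    out = [pixels[0]]
    out += [(pixels[i] - pixels[i - 1]) % 256 for i in range(1, len(pixels))]
    return out

def diff_decode(codes):
    """Invert diff_encode by cumulative summation mod 256."""
    out = [codes[0]]
    for c in codes[1:]:
        out.append((out[-1] + c) % 256)
    return out

row = [120, 121, 119, 119, 200]
assert diff_decode(diff_encode(row)) == row   # lossless round trip
```

The modulo arithmetic keeps every coded value inside the 8-bit range, which is the same kind of range discipline the scheme needs when avoiding underflow and overflow in the cover image.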
D3: A Collaborative Infrastructure for Aerospace Design
NASA Technical Reports Server (NTRS)
Walton, Joan; Filman, Robert E.; Knight, Chris; Korsmeyer, David J.; Lee, Diana D.; Clancy, Daniel (Technical Monitor)
2001-01-01
DARWIN is a NASA-developed, Internet-based system that enables aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid dynamics) model executions. DARWIN captures, stores, and indexes data; manages derived knowledge (such as visualizations across multiple data sets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing users to share not only visualizations of data but also commentary about, and views of, the data.
The Precategorical Nature of Visual Short-Term Memory
ERIC Educational Resources Information Center
Quinlan, Philip T.; Cohen, Dale J.
2016-01-01
We conducted a series of recognition experiments that assessed whether visual short-term memory (VSTM) is sensitive to shared category membership of to-be-remembered (tbr) images of common objects. In Experiment 1 some of the tbr items shared the same basic level category (e.g., hand axe): Such items were no better retained than others. In the…
MEG-BIDS, the brain imaging data structure extended to magnetoencephalography
Niso, Guiomar; Gorgolewski, Krzysztof J.; Bock, Elizabeth; Brooks, Teon L.; Flandin, Guillaume; Gramfort, Alexandre; Henson, Richard N.; Jas, Mainak; Litvak, Vladimir; T. Moreau, Jeremy; Oostenveld, Robert; Schoffelen, Jan-Mathijs; Tadel, Francois; Wexler, Joseph; Baillet, Sylvain
2018-01-01
We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. Until now, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone. PMID:29917016
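A minimal sketch of the entity-based file naming that BIDS prescribes and MEG-BIDS extends to MEG recordings. The subject, session, and task labels and the file extension below are hypothetical examples; the full specification defines many more entities, sidecar metadata files, and vendor-specific formats:

```python
# Sketch of a BIDS-style path for an MEG recording, built from the
# sub-/ses-/task- entities that MEG-BIDS inherits from MRI-BIDS.
# Labels are hypothetical; consult the BIDS specification for the
# complete entity list and allowed extensions.

def meg_bids_path(sub, ses, task, ext="fif"):
    """Return '<dirs>/<filename>' for one MEG run."""
    stem = f"sub-{sub}_ses-{ses}_task-{task}_meg.{ext}"
    return f"sub-{sub}/ses-{ses}/meg/{stem}"

path = meg_bids_path("01", "01", "rest")
```

Encoding the metadata in both the directory tree and the filename is what lets analysis tools locate and validate recordings without a separate database, which is the interoperability point the abstract makes.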
MGH-USC Human Connectome Project Datasets with Ultra-High b-Value Diffusion MRI
Fan, Qiuyun; Witzel, Thomas; Nummenmaa, Aapo; Van Dijk, Koene R.A.; Van Horn, John D.; Drews, Michelle K.; Somerville, Leah H.; Sheridan, Margaret A.; Santillana, Rosario M.; Snyder, Jenna; Hedden, Trey; Shaw, Emily E.; Hollinshead, Marisa O.; Renvall, Ville; Zanzonico, Roberta; Keil, Boris; Cauley, Stephen; Polimeni, Jonathan R.; Tisdall, Dylan; Buckner, Randy L.; Wedeen, Van J.; Wald, Lawrence L.; Toga, Arthur W.; Rosen, Bruce R.
2015-01-01
The MGH-USC CONNECTOM MRI scanner housed at the Massachusetts General Hospital (MGH) is a major hardware innovation of the Human Connectome Project (HCP). The 3T CONNECTOM scanner is capable of producing magnetic field gradients of up to 300 mT/m for in vivo human brain imaging, which greatly shortens the time spent on diffusion encoding and decreases the signal loss due to T2 decay. To demonstrate the capability of the novel gradient system, data from healthy adult participants were acquired for this MGH-USC Adult Diffusion Dataset (N=35), minimally preprocessed, and shared through the Laboratory of Neuro Imaging Image Data Archive (LONI IDA) and the WU-Minn Connectome Database (ConnectomeDB). Another purpose of sharing the data is to facilitate methodological studies of diffusion MRI (dMRI) analyses utilizing high diffusion contrast, which may not be easily feasible with standard MR gradient systems. In addition, acquisition of the MGH-Harvard-USC Lifespan Dataset is currently underway and will include 120 healthy participants ranging from 8 to 90 years old; it will also be shared through LONI IDA and ConnectomeDB. Here we describe the efforts of the MGH-USC HCP consortium in acquiring and sharing the ultra-high b-value diffusion MRI data and provide a report on data preprocessing and access. We conclude with a demonstration of the example data, along with results of standard diffusion analyses, including q-ball Orientation Distribution Function (ODF) reconstruction and tractography. PMID:26364861
Neuronal Morphology goes Digital: A Research Hub for Cellular and System Neuroscience
Parekh, Ruchi; Ascoli, Giorgio A.
2013-01-01
The importance of neuronal morphology in brain function has been recognized for over a century. The broad applicability of “digital reconstructions” of neuron morphology across neuroscience sub-disciplines has stimulated the rapid development of numerous synergistic tools for data acquisition, anatomical analysis, three-dimensional rendering, electrophysiological simulation, growth models, and data sharing. Here we discuss the processes of histological labeling, microscopic imaging, and semi-automated tracing. Moreover, we provide an annotated compilation of currently available resources in this rich research “ecosystem” as a central reference for experimental and computational neuroscience. PMID:23522039
Big data sharing and analysis to advance research in post-traumatic epilepsy.
Duncan, Dominique; Vespa, Paul; Pitkanen, Asla; Braimah, Adebayo; Lapinlampi, Nina; Toga, Arthur W
2018-06-01
We describe the infrastructure and functionality for a centralized preclinical and clinical data repository and analytic platform that supports importing heterogeneous multi-modal data, automatically and manually linking data across modalities and sites, and searching content. We have developed and applied innovative image and electrophysiology processing methods to identify candidate biomarkers from MRI, EEG, and multi-modal data. Based on heterogeneous biomarkers, we present novel analytic tools designed to study epileptogenesis in animal models and humans, with the goal of tracking the probability of developing epilepsy over time. Copyright © 2017. Published by Elsevier Inc.
Litchman, Michelle L; Allen, Nancy A; Colicchio, Vanessa D; Wawrzynski, Sarah E; Sparling, Kerri M; Hendricks, Krissa L; Berg, Cynthia A
2018-01-01
Little research exists regarding how real-time continuous glucose monitoring (RT-CGM) data sharing plays a role in the relationship between patients and their care partners. Our aims were to (1) identify the benefits and challenges related to RT-CGM data sharing from the patient and care partner perspective and (2) explore the number and type of individuals who share and follow RT-CGM data. This qualitative content analysis was conducted by examining publicly available blogs focused on RT-CGM and data sharing. A thematic analysis of blogs and associated comments was conducted. A systematic appraisal of personal blogs examined 39 blogs with 206 corresponding comments. The results of the study provided insight into the benefits and challenges related to individuals with diabetes sharing their RT-CGM data with a care partner(s). The analysis resulted in three themes: (1) RT-CGM data sharing enhances feelings of safety, (2) the need to communicate boundaries to avoid judgment, and (3) choice about sharing and following RT-CGM data. RT-CGM data sharing occurred within dyads (n = 46), triads (n = 15), and tetrads (n = 2). Adults and children with type 1 diabetes and their care partners are empowered by the ability to share and follow RT-CGM data. Our findings suggest that RT-CGM data sharing between an individual with diabetes and their care partner can complicate relationships. Healthcare providers need to engage patients and care partners in discussions about best practices related to RT-CGM sharing and following to avoid frustrations within the relationship.
Pupillometry reveals the physiological underpinnings of the aversion to holes.
Ayzenberg, Vladislav; Hickey, Meghan R; Lourenco, Stella F
2018-01-01
An unusual, but common, aversion to images with clusters of holes is known as trypophobia. Recent research suggests that trypophobic reactions are caused by visual spectral properties also present in aversive images of evolutionary threatening animals (e.g., snakes and spiders). However, despite similar spectral properties, it remains unknown whether there is a shared emotional response to holes and threatening animals. Whereas snakes and spiders are known to elicit a fear reaction, associated with the sympathetic nervous system, anecdotal reports from self-described trypophobes suggest reactions more consistent with disgust, which is associated with activation of the parasympathetic nervous system. Here we used pupillometry in a novel attempt to uncover the distinct emotional response associated with a trypophobic response to holes. Across two experiments, images of holes elicited greater constriction compared to images of threatening animals and neutral images. Moreover, this effect held when controlling for level of arousal and accounting for the pupil grating response. This pattern of pupillary response is consistent with involvement of the parasympathetic nervous system and suggests a disgust, not a fear, response to images of holes. Although general aversion may be rooted in shared visual-spectral properties, we propose that the specific emotion is determined by cognitive appraisal of the distinct image content.
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly, simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.
Meta-analysis of randomized clinical trials in the era of individual patient data sharing.
Kawahara, Takuya; Fukuda, Musashi; Oba, Koji; Sakamoto, Junichi; Buyse, Marc
2018-06-01
Individual patient data (IPD) meta-analysis is considered to be a gold standard when the results of several randomized trials are combined. Recent initiatives on sharing IPD from clinical trials offer unprecedented opportunities for using such data in IPD meta-analyses. First, we discuss the evidence generated and the benefits obtained by a long-established prospective IPD meta-analysis in early breast cancer. Next, we discuss a data-sharing system that has been adopted by several pharmaceutical sponsors. We review a number of retrospective IPD meta-analyses that have already been proposed using this data-sharing system. Finally, we discuss the role of data sharing in IPD meta-analysis in the future. Treatment effects can be more reliably estimated in both types of IPD meta-analyses than with summary statistics extracted from published papers. Specifically, with rich covariate information available on each patient, prognostic and predictive factors can be identified or confirmed. Also, when several endpoints are available, surrogate endpoints can be assessed statistically. Although there are difficulties in conducting, analyzing, and interpreting retrospective IPD meta-analysis utilizing the currently available data-sharing systems, data sharing will play an important role in IPD meta-analysis in the future.
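Treatment effects in meta-analyses of the kind described above are commonly pooled with a random-effects model. The sketch below is a generic textbook DerSimonian-Laird pooling step, not code from the paper, and the three study effect sizes and variances are invented toy numbers:

```python
def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird
    random-effects model; returns (pooled effect, SE, tau^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                     # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# Invented toy numbers: three studies, standardized effects and variances
pooled, se, tau2 = dersimonian_laird([-0.20, -0.15, -0.16], [0.01, 0.01, 0.01])
print(round(pooled, 4))  # -0.17
```

With IPD rather than published summary statistics, the per-study `effects` and `variances` fed into such a model can be re-estimated consistently across trials (same endpoint definitions, same covariate adjustments), which is one reason IPD meta-analysis is considered a gold standard.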
Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images
Levenson, Richard M.; Krupinski, Elizabeth A.; Navarro, Victor M.; Wasserman, Edward A.
2015-01-01
Pathologists and radiologists spend years acquiring and refining their medically essential visual skills, so it is of considerable interest to understand how this process actually unfolds and what image features and properties are critical for accurate diagnostic performance. Key insights into human behavioral tasks can often be obtained by using appropriate animal models. We report here that pigeons (Columba livia)—which share many visual system properties with humans—can serve as promising surrogate observers of medical images, a capability not previously documented. The birds proved to have a remarkable ability to distinguish benign from malignant human breast histopathology after training with differential food reinforcement; even more importantly, the pigeons were able to generalize what they had learned when confronted with novel image sets. The birds’ histological accuracy, like that of humans, was modestly affected by the presence or absence of color as well as by degrees of image compression, but these impacts could be ameliorated with further training. Turning to radiology, the birds proved to be similarly capable of detecting cancer-relevant microcalcifications on mammogram images. However, when given a different (and for humans quite difficult) task—namely, classification of suspicious mammographic densities (masses)—the pigeons proved to be capable only of image memorization and were unable to successfully generalize when shown novel examples. The birds’ successes and difficulties suggest that pigeons are well-suited to help us better understand human medical image perception, and may also prove useful in performance assessment and development of medical imaging hardware, image processing, and image analysis tools. PMID:26581091
Electron Tomography: A Three-Dimensional Analytic Tool for Hard and Soft Materials Research.
Ercius, Peter; Alaidi, Osama; Rames, Matthew J; Ren, Gang
2015-10-14
Three-dimensional (3D) structural analysis is essential to understand the relationship between the structure and function of an object. Many analytical techniques, such as X-ray diffraction, neutron spectroscopy, and electron microscopy imaging, are used to provide structural information. Transmission electron microscopy (TEM), one of the most popular analytic tools, has been widely used for structural analysis in both physical and biological sciences for many decades, in which 3D objects are projected into two-dimensional (2D) images. In many cases, 2D-projection images are insufficient to understand the relationship between the 3D structure and the function of nanoscale objects. Electron tomography (ET) is a technique that retrieves 3D structural information from a tilt series of 2D projections, and is gradually becoming a mature technology with sub-nanometer resolution. Distinct methods to overcome sample-based limitations have been separately developed in both physical and biological science, although they share some basic concepts of ET. This review discusses the common basis for 3D characterization, and specifies difficulties and solutions regarding both hard and soft materials research. It is hoped that novel solutions based on current state-of-the-art techniques for advanced applications in hybrid matter systems can be motivated. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nenadić, Igor; Hoof, Anna; Dietzek, Maren; Langbein, Kerstin; Reichenbach, Jürgen R; Sauer, Heinrich; Güllmar, Daniel
2017-08-30
Both schizophrenia and bipolar disorder show abnormalities of white matter, as seen in diffusion tensor imaging (DTI) analyses of major brain fibre bundles. While studies in each of the two conditions have indicated possible overlap in anatomical location, there are few direct comparisons between the disorders. Also, it is unclear whether phenotypically similar subgroups (e.g. patients with bipolar disorder and psychotic features) might share white matter pathologies or rather be dissimilar. Using region-of-interest (ROI) analysis of white matter with diffusion tensor imaging (DTI) at 3 T, we analysed fractional anisotropy (FA), radial diffusivity (RD), and apparent diffusion coefficient (ADC) of the corpus callosum and cingulum bundle in 33 schizophrenia patients, 17 euthymic (previously psychotic) bipolar disorder patients, and 36 healthy controls. ANOVA showed significant main effects of group for RD and ADC (both elevated in schizophrenia). Across the corpus callosum ROIs, there was no group effect on FA, but there were effects for RD (elevated in schizophrenia, lower in bipolar disorder) and ADC (higher in schizophrenia, intermediate in bipolar disorder). Our findings show similarities and differences (some gradual) across regions of the two major fibre tracts implicated in these disorders, which would be consistent with a neurobiological overlap of similar clinical phenotypes. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis methods have many advantages over pixel-based methods, so they are a current research hotspot. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than the others. To avoid over-segmentation and to highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weights, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas.
Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and that it enables priority control over the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.
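One plausible way to fold a per-pixel saliency weight into a bottom-up merge decision can be sketched as follows. This is a toy illustration of the general idea, not the authors' algorithm; the cost form, region statistics, and `alpha` parameter are all invented:

```python
def merge_cost(mean_a, sal_a, mean_b, sal_b, alpha=2.0):
    """Toy merge cost: spectral difference between region means,
    inflated when the two regions differ in mean saliency, so a
    salient object resists being absorbed into non-salient background."""
    return abs(mean_a - mean_b) * (1.0 + alpha * abs(sal_a - sal_b))

# Invented regions: (mean intensity, mean saliency in [0, 1])
regions = {
    "object_part_a": (120, 0.90),
    "object_part_b": (128, 0.85),
    "background":    (126, 0.10),
}
pairs = [("object_part_a", "object_part_b"), ("object_part_a", "background")]
best = min(pairs, key=lambda p: merge_cost(*regions[p[0]], *regions[p[1]]))
# Background is spectrally closer to part_a (|120-126| < |120-128|), yet the
# saliency term makes the two object parts the cheaper merge.
print(best)
```

In a real bottom-up segmenter this cost would be evaluated over all adjacent region pairs at each scale, so the saliency map steers merges without replacing the spectral homogeneity criterion.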
Cool Apps: Productivity at Your Fingertips
ERIC Educational Resources Information Center
Flaherty, Bill
2013-01-01
In addition to listing apps and their value, this article focuses on ways people can be more productive by adopting certain workflows in several ways. Apps listed herein include those useful in calendaring, printing, photo-editing, image-recognition, image scanning, electronic signatures, and making and sharing lists and notes.
Sustainability and business: what is green corporate image?
NASA Astrophysics Data System (ADS)
Bathmanathan, Vathana; Hironaka, Chikako
2016-03-01
Green corporate image is reckoned to be a driving factor in current business setups. Stakeholders' green perception of a firm encourages business growth. Organisations are moving from conventional businesses to running businesses with a sustainable agenda that creates value for their brand. This paper analyses several green corporate image initiatives and concepts from various researchers and shares how these can be essential for business.
USDA-ARS?s Scientific Manuscript database
Although conventional high-altitude airborne remote sensing and low-altitude unmanned aerial system (UAS) based remote sensing share many commonalities, one of the major differences between the two remote sensing platforms is that the latter has much smaller image footprint. To cover the same area o...
ERIC Educational Resources Information Center
Claxton, Laura J.
2011-01-01
Previous studies have found that preschoolers are confused about the relationship between two-dimensional (2D) symbols and their referents. Preschoolers report that 2D images (e.g. televised images and photographs) share some of the characteristics of the objects they are representing. A novel Comparison Task was created to test what might account…
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Esteva, M.; Ketcham, R. A.
2017-12-01
Nanometer to centimeter-scale imaging such as (focused ion beam) scattered electron microscopy, magnetic resonance imaging and X-ray (micro)tomography has since 1990s introduced 2D and 3D datasets of rock microstructure that allow investigation of nonlinear flow and mechanical phenomena on the length scales that are otherwise impervious to laboratory measurements. The numerical approaches that use such images produce various upscaled parameters required by subsurface flow and deformation simulators. All of this has revolutionized our knowledge about grain scale phenomena. However, a lack of data-sharing infrastructure among research groups makes it difficult to integrate different length scales. We have developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal (https://www.digitalrocksportal.org), that (1) organizes images and related experimental measurements of different porous materials, (2) improves access to them for a wider community of engineering or geosciences researchers not necessarily trained in computer science or data analysis. Digital Rocks Portal (NSF EarthCube Grant 1541008) is the first repository for imaged porous microstructure data. It is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (University of Texas at Austin). Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative. We show how the data can be documented, referenced in publications via digital object identifiers (see Figure below for examples), visualized, searched for and linked to other repositories. We show recently implemented integration of the remote parallel visualization, bulk upload for large datasets as well as preliminary flow simulation workflow with the pore structures currently stored in the repository. 
We discuss the issues of collecting correct metadata, data discoverability and repository sustainability.
Relevance of eHealth standards for big data interoperability in radiology and beyond.
Marcheschi, Paolo
2017-06-01
The aim of this paper is to report on the implementation of radiology and related information technology standards to feed big data repositories, and so to create a solid substrate on which analysis software can operate. Digital Imaging and Communications in Medicine (DICOM) and Health Level 7 (HL7) are the major standards for radiology and medical information technology. They define formats and protocols to transmit medical images, signals, and patient data inside and outside hospital facilities. These standards can be implemented as-is, but big data expectations are stimulating a new approach that simplifies data collection and interoperability and seeks to reduce the time to full implementation inside health organizations. Virtual Medical Record, DICOM Structured Reporting, and HL7 Fast Healthcare Interoperability Resources (FHIR) are changing the way medical data are shared among organizations, and they will be the keys to big data interoperability. Until we find simple and comprehensive methods to store and disseminate detailed information on the patient's health, we will not be able to get optimal results from the analysis of those data.
NASA Astrophysics Data System (ADS)
Dutton, Gregory
Forensic science is a collection of applied disciplines that draws from all branches of science. A key question in forensic analysis is: to what degree do a piece of evidence and a known reference sample share characteristics? Quantification of similarity, estimation of uncertainty, and determination of relevant population statistics are of current concern. A 2016 PCAST report questioned the foundational validity and the validity in practice of several forensic disciplines, including latent fingerprints, firearms comparisons, and DNA mixture interpretation. One recommendation was the advancement of objective, automated comparison methods based on image analysis and machine learning. These concerns parallel the National Institute of Justice's ongoing R&D investments in applied chemistry, biology, and physics. NIJ maintains a funding program spanning from fundamental research with potential for forensic application to the validation of novel instruments and methods. Since 2009, NIJ has funded over $179M in external research to support the advancement of accuracy, validity, and efficiency in the forensic sciences. An overview of NIJ's programs will be presented, with examples of relevant projects from fluid dynamics, 3D imaging, acoustics, and materials science.
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is limited due to the complexity of imaging, image workflow, and post-processing, and the lack of algorithmic standards, which hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to community physicians' uncertainty about how to integrate fMRI into practice. In addition, training of physicians with fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions which perform fMRI have a team of basic researchers and physicians to perform fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and made available at any institution that does not have these resources. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus makes standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.
A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision
NASA Astrophysics Data System (ADS)
Tsai, Yuan-Yu
2016-03-01
Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
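The sharing polynomial described above is the standard Shamir (k, n) construction over a prime field: the secret is the polynomial's constant term, and any k share points recover it by Lagrange interpolation at x = 0. The sketch below is generic Shamir sharing of a single value, not the paper's 3D-model-specific scheme; p = 257 is chosen so any byte-sized coordinate value fits in [0, p - 1]:

```python
import random

P = 257  # prime just above 255, so every byte value fits in [0, P-1]

def make_shares(secret, k, n, prime=P):
    """Split a secret value into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=P):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret

shares = make_shares(123, k=3, n=5)
print(reconstruct(shares[:3]), reconstruct(shares[2:5]))  # 123 123
```

In the paper's setting each point-geometry coordinate is first encoded into such integer values, and one share per participant is then embedded reversibly into that participant's 3D stego model.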
Segmenting root systems in xray computed tomography images using level sets
USDA-ARS?s Scientific Manuscript database
The segmentation of plant roots from soil and other growing mediums in xray computed tomography images is needed to effectively study the shapes of roots without excavation. However, segmentation is a challenging problem in this context because the root and non-root regions share similar features. ...
Text recognition and correction for automated data collection by mobile devices
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-03-01
Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
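A knowledge-based correction step of the general kind described can be sketched as snapping each OCR token to the nearest word in a domain vocabulary by edit distance. This is an illustrative reimplementation of the idea, not the authors' KBC method; the receipt vocabulary and distance threshold are invented:

```python
def levenshtein(a, b):
    """Edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def correct(token, vocabulary, max_dist=2):
    """Snap an OCR token to the closest vocabulary word, if close enough."""
    best = min(vocabulary, key=lambda w: levenshtein(token, w))
    return best if levenshtein(token, best) <= max_dist else token

# Invented receipt vocabulary; "T0TAI" shows typical O->0 and L->I confusions
vocab = ["TOTAL", "SUBTOTAL", "TAX", "CASH", "CHANGE"]
print(correct("T0TAI", vocab))  # TOTAL
```

Because store receipts use a small, predictable vocabulary, even this simple dictionary-snapping step can repair camera-capture artifacts (blur, skew, low contrast) that general-purpose image preprocessing leaves behind.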
A review of anisotropic conductivity models of brain white matter based on diffusion tensor imaging.
Wu, Zhanxiong; Liu, Yang; Hong, Ming; Yu, Xiaohui
2018-06-01
The conductivity of brain tissues is not only essential for electromagnetic source estimation (ESI), but is also a key reflector of brain functional changes. Unlike the other brain tissues, the conductivity of white matter (WM) is highly anisotropic, and a tensor is needed to describe it. Traditional electrical property imaging methods, such as electrical impedance tomography (EIT) and magnetic resonance electrical impedance tomography (MREIT), usually fail to image the anisotropic conductivity tensor of WM with high spatial resolution. Diffusion tensor imaging (DTI) is a newly developed technique that can fulfill this purpose. This paper reviews the existing anisotropic conductivity models of WM based on DTI, discusses their advantages and disadvantages, and identifies opportunities for future research on this subject. It is crucial to obtain the linear conversion coefficient between the eigenvalues of the anisotropic conductivity tensor and the diffusion tensor, since they share the same eigenvectors. We conclude that the electrochemical model is suitable for ESI analysis because the conversion coefficient can be directly obtained from the concentration of ions in extracellular liquid, and that the volume fraction model is appropriate for studying the influence of WM structural changes on electrical conductivity.
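Because the conductivity and diffusion tensors share the same eigenvectors, a linear-coefficient conversion reduces to scaling the diffusion eigenvalues. A minimal sketch of that conversion (the coefficient value and the voxel tensor below are illustrative assumptions, not values from the reviewed models):

```python
import numpy as np

def conductivity_from_diffusion(D, s):
    """sigma = V diag(s * lam) V^T: same eigenvectors as D, scaled eigenvalues."""
    lam, V = np.linalg.eigh(D)            # eigendecomposition of the 3x3 tensor
    return V @ np.diag(s * lam) @ V.T

# Illustrative axially symmetric WM voxel: fast diffusion along the fiber (x) axis
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])     # diffusivities, mm^2/s
sigma = conductivity_from_diffusion(D, s=0.844)   # s: illustrative coefficient
# with a single linear coefficient, sigma inherits the anisotropy ratio of D
```

Note that with one global coefficient the mapping is simply sigma = s * D; the reviewed models differ precisely in how s is derived (electrochemically, from volume fractions, etc.) and in whether it varies per eigenvalue.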
Liu, Danzhou; Hua, Kien A; Sugaya, Kiminobu
2008-09-01
With the advances in medical imaging devices, large volumes of high-resolution 3-D medical image data have been produced. These high-resolution 3-D data are very large in size, and severely stress storage systems and networks. Most existing Internet-based 3-D medical image interactive applications therefore deal with only low- or medium-resolution image data. While it is possible to download the whole 3-D high-resolution image data from the server and perform the image visualization and analysis at the client site, such an alternative is infeasible when the high-resolution data are very large, and many users concurrently access the server. In this paper, we propose a novel framework for Internet-based interactive applications of high-resolution 3-D medical image data. Specifically, we first partition the whole 3-D data into buckets, remove the duplicate buckets, and then, compress each bucket separately. We also propose an index structure for these buckets to efficiently support typical queries such as 3-D slicer and region of interest, and only the relevant buckets are transmitted instead of the whole high-resolution 3-D medical image data. Furthermore, in order to better support concurrent accesses and to improve the average response time, we also propose techniques for efficient query processing, incremental transmission, and client sharing. Our experimental study in simulated and realistic environments indicates that the proposed framework can significantly reduce storage and communication requirements, and can enable real-time interaction with remote high-resolution 3-D medical image data for many concurrent users.
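The partition-deduplicate-compress step can be sketched in a few lines. The bucket size, hash choice, and in-memory dictionaries below are illustrative assumptions, not the paper's actual storage layout or index structure:

```python
# Sketch: split a 3D volume into fixed-size buckets, store each unique
# bucket once (content-hash dedup), compress the stored buckets, and keep
# an index from bucket origin to hash so ROI queries fetch only what they need.
import hashlib
import zlib
import numpy as np

def bucketize(volume, b=8):
    store, index = {}, {}                 # hash -> compressed bytes; origin -> hash
    nz, ny, nx = volume.shape
    for z in range(0, nz, b):
        for y in range(0, ny, b):
            for x in range(0, nx, b):
                raw = volume[z:z+b, y:y+b, x:x+b].tobytes()
                h = hashlib.sha1(raw).hexdigest()
                if h not in store:        # duplicate buckets stored only once
                    store[h] = zlib.compress(raw)
                index[(z, y, x)] = h
    return store, index

vol = np.zeros((16, 16, 16), dtype=np.uint8)   # mostly empty: heavy duplication
vol[0:8, 0:8, 0:8] = 1
store, index = bucketize(vol)
# 8 buckets in the index, but only 2 unique buckets in the store
block = np.frombuffer(zlib.decompress(store[index[(0, 0, 0)]]),
                      dtype=vol.dtype).reshape(8, 8, 8)   # ROI-style retrieval
```

In this toy case the seven all-zero buckets collapse to a single stored entry, which is the effect that reduces storage and transmission for large, sparse medical volumes.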
Data integration: Combined imaging and electrophysiology data in the cloud.
Kini, Lohith G; Davis, Kathryn A; Wagenaar, Joost B
2016-01-01
There has been an increasing effort to correlate electrophysiology data with imaging in patients with refractory epilepsy over recent years. IEEG.org provides a free-access, rapidly growing archive of imaging data combined with electrophysiology data and patient metadata. It currently contains over 1200 human and animal datasets, with multiple data modalities associated with each dataset (neuroimaging, EEG, EKG, de-identified clinical and experimental data, etc.). The platform is developed around the concept that scientific data sharing requires a flexible platform that allows sharing of data from multiple file formats. IEEG.org provides high- and low-level access to the data in addition to providing an environment in which domain experts can find, visualize, and analyze data in an intuitive manner. Here, we present a summary of the current infrastructure of the platform, available datasets and goals for the near future. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lin, Yueguan; Wang, Wei; Wen, Qi; Huang, He; Lin, Jingli; Zhang, Wei
2015-12-01
The Ms 8.0 Wenchuan earthquake that occurred on May 12, 2008 brought huge casualties and property losses to the Chinese people, and Beichuan County was destroyed in the earthquake. To leave a site commemorating the victims and to support earthquake science education and research, the Beichuan National Earthquake Ruins Museum has been built on the ruins of Beichuan County. Based on the demands for digital preservation of the earthquake ruins park and for earthquake damage assessment research data, we set up a data set of the Beichuan National Earthquake Ruins Museum, including satellite remote sensing imagery, airborne remote sensing imagery, ground photogrammetry data, and ground acquisition data. At the same time, to better serve earthquake science research, we designed the sharing ideas and schemes for this scientific data set.
What types of astronomy images are most popular?
NASA Astrophysics Data System (ADS)
Allen, Alice; Bonnell, Jerry T.; Connelly, Paul; Haring, Ralf; Lowe, Stuart R.; Nemiroff, Robert J.
2015-01-01
Stunning imagery helps make astronomy one of the most popular sciences -- but what types of astronomy images are most popular? To help answer this question, public response to images posted to various public venues of the Astronomy Picture of the Day (APOD) is investigated. APOD portals queried included the main NASA website and the social media mirrors on Facebook, Google Plus, and Twitter. Popularity measures include polls, downloads, page views, likes, shares, and retweets; these measures are used to assess how image popularity varies in relation to various image attributes, including topic and topicality.
Re-engineering the process of medical imaging physics and technology education and training.
Sprawls, Perry
2005-09-01
The extensive availability of digital technology provides an opportunity for enhancing both the effectiveness and efficiency of virtually all functions in the process of medical imaging physics and technology education and training. This includes degree granting academic programs within institutions and a wide spectrum of continuing education lifelong learning activities. Full achievement of the advantages of technology-enhanced education (e-learning, etc.) requires an analysis of specific educational activities with respect to desired outcomes and learning objectives. This is followed by the development of strategies and resources that are based on established educational principles. The impact of contemporary technology comes from its ability to place learners into enriched learning environments. The full advantage of a re-engineered and implemented educational process involves changing attitudes and functions of learning facilitators (teachers) and resource allocation and sharing both within and among institutions.
Sadigh, Gelareh; Carlos, Ruth C; Krupinski, Elizabeth A; Meltzer, Carolyn C; Duszak, Richard
2017-11-01
The purpose of this article is to review the literature on communicating transparency in health care pricing, both overall and specifically for medical imaging. Focus is also placed on the imperatives and initiatives that will increasingly impact radiologists and their patients. Most Americans seek transparency in health care pricing, yet such discussions occur in fewer than half of patient encounters. Although price transparency tools can help decrease health care spending, most are used infrequently and most lack information about quality. Given the high costs associated with many imaging services, radiologists should be aware of such initiatives to optimize patient engagement and informed shared decision making.
Picking Up Artifacts: Storyboarding as a Gateway to Reuse
NASA Astrophysics Data System (ADS)
Wahid, Shahtab; Branham, Stacy M.; Cairco, Lauren; McCrickard, D. Scott; Harrison, Steve
Storyboarding offers designers the opportunity to illustrate a visual narrative of use. Because designers often refer to past ideas, we argue storyboards can be constructed by reusing shared artifacts. We present a study in which we explore how designers reuse artifacts consisting of images and rationale during storyboard construction. We find that images can aid in accessing rationale, and that connections among features aid in deciding what to reuse, creating new artifacts, and constructing the storyboard. Based on requirements derived from our findings, we present a storyboarding tool, PIC-UP, to facilitate artifact sharing and reuse, and evaluate its use in an exploratory study. We conclude with remarks on facilitating reuse and future work.
NASA Astrophysics Data System (ADS)
Luo, D.; Cai, F.
2017-12-01
Small-scale, high-resolution marine multi-channel seismic surveys using large-energy sparkers are characterized by a high dominant source frequency, wide bandwidth, and high resolution. The technology, with its high resolution and high detection precision, was designed to improve the imaging quality of shallow sediments. In this study, a 20 kJ sparker and a 24-channel streamer cable with a 6.25 m group interval were used as the seismic source and receiver system, respectively. Key factors for seismic imaging of gas hydrate are enhancement of the S/N ratio, amplitude compensation, and detailed velocity analysis. However, the data in this study have the following characteristics: 1. Small maximum offsets, which are adverse to velocity analysis and multiple attenuation. 2. Lack of low-frequency information; that is, components below 100 Hz are absent. 3. Low S/N ratio owing to the low fold (only 12). These characteristics make it difficult to reach the targets of seismic imaging. In this study, targeted processing methods are used to improve the seismic imaging quality of gas hydrate. First, several noise-suppression technologies are applied in combination to the pre-stack seismic data to suppress seismic noise and improve the S/N ratio, including a spectrum-sharing noise elimination method, median filtering, and an exogenous interference suppression method. Second, a combination of three technologies, SRME, τ-p deconvolution, and high-precision Radon transformation, is used to remove multiples. Third, an accurate velocity field is used in amplitude energy compensation to highlight the Bottom Simulating Reflector (BSR, the indicator of gas hydrates) and gas migration pathways (such as gas chimneys and hot spots). Fourth, fine velocity analysis technology is used to improve the accuracy of velocity analysis. Fifth, pre-stack deconvolution is used to compensate for low-frequency energy and suppress ghosting, so that formation reflection characteristics are highlighted. The results show that small-scale, high-resolution marine sparker multi-channel seismic surveys are much more effective in improving the resolution and quality of gas hydrate imaging than conventional seismic acquisition technology.
IMAGES: An interactive image processing system
NASA Technical Reports Server (NTRS)
Jensen, J. R.
1981-01-01
The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.
Instrumentation in Diffuse Optical Imaging
Zhang, Xiaofeng
2014-01-01
Diffuse optical imaging is highly versatile and has a very broad range of applications in biology and medicine. It covers diffuse optical tomography, fluorescence diffuse optical tomography, bioluminescence, and a number of other new imaging methods. These methods of diffuse optical imaging have diversified instrument configurations but share the same core physical principle – light propagation in highly diffusive media, i.e., the biological tissue. In this review, the author summarizes the latest development in instrumentation and methodology available to diffuse optical imaging in terms of system architecture, light source, photo-detection, spectral separation, signal modulation, and lastly imaging contrast. PMID:24860804
Martins, Ana; Taylor, Rachel M; Lobel, Brian; McCann, Beth; Soanes, Louise; Whelan, Jeremy S; Fern, Lorna A
2018-05-09
Discovering sexuality and romantic relationships are important developmental milestones in adolescence and young adulthood. A cancer diagnosis imposes obstacles for young people, such as changes in sexual function due to the disease and/or side effects of treatment, body image concerns, and interpersonal relationship difficulties. This can cause psychological distress and can impact quality of life. We aimed to explore the sexual health information and support needs of adolescents and young adults with cancer. Five young people aged 16-24 years, with a previous cancer diagnosis when aged 13-22 years, attended an in-depth 4-hour workshop. The framework approach was used to analyze workshop transcripts. Three overarching themes emerged: (i) information sharing; (ii) contexts and relationships (influencing factors); and (iii) information sharing preferences. Information shared by healthcare professionals focused on a medicalized view of sex, with symptoms, infection control, and protected sex at its core. Young people had unanswered questions about sexual function, the impact of cancer and how to manage it, and about pleasure, body image, and relationships. Parents' presence at clinical consultations inhibited discussions about sex. Young people wanted professionals who were comfortable talking about sex with them. Young people exhibited significant unmet needs around information provision on sex, body image, and relationships. They wanted information to be given by professionals, plus access to online resources. Development of training for professionals and of resources to support young people requires further work.
Remote control of an MR imaging study via tele-collaboration tools
NASA Astrophysics Data System (ADS)
Sullivan, John M., Jr.; Mullen, Julia S.; Benz, Udo A.; Schmidt, Karl F.; Murugavel, Murali; Chen, Wei; Ghadyani, Hamid
2005-04-01
In contrast to traditional 'video conferencing', the Access Grid (AG), developed by Argonne National Laboratory, is a collection of audio, video, and shared application tools that provide the 'persistent presence' of each participant. Among the shared application tools is the ability to share viewing and control of presentations, browsers, images, and movies. When used in conjunction with Virtual Network Computing (VNC) software, an investigator can interact with colleagues at a remote site and control remote systems via local keyboard and mouse commands. This combination allows for effective viewing and discussion of information, i.e., data, images, and results. It is clear that such an approach, when applied to the medical sciences, will provide a means by which a team of experts can not only access, but also interact with and control medical devices for the purpose of experimentation, diagnosis, surgery, and therapy. We present the development of an application node at our 4.7 Tesla MR magnet facility and a demonstration of remote investigator control of the magnet. A local magnet operator performs manual tasks such as loading the test subject into the magnet and administering the stimulus associated with the functional MRI study. The remote investigator has complete control of the magnet console: s/he can adjust the gradient coil settings, the pulse sequence, the image capture frequency, etc. A geographically distributed audience views and interacts with the remote investigator and the local MR operator. This AG demonstration of MR magnet control illuminates the potential of untethered medical experiments, procedures, and training.
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
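The Amdahl's-law scalability analysis mentioned at the end has a one-line closed form; the parallel fractions below are illustrative assumptions (the reported 12-fold speedup on 12 cores implies a near-fully parallelizable workload):

```python
def amdahl_speedup(f, n):
    """Upper bound on speedup with n cores when a fraction f of the work parallelizes."""
    return 1.0 / ((1.0 - f) + f / n)

# A fully parallel workload scales linearly with cores; even a small serial
# fraction caps the achievable speedup regardless of core count.
s_full = amdahl_speedup(1.00, 12)   # 12-fold, matching the reported improvement
s_95 = amdahl_speedup(0.95, 12)     # roughly 7.7-fold: 5% serial work dominates
```

As n grows, the speedup approaches 1 / (1 - f), which is why the abstract's block-volume design aims to keep the serial fraction small.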
Job Sharing--Opportunities or Headaches?
ERIC Educational Resources Information Center
Leighton, Patricia
1986-01-01
Discusses the issue of job sharing as a new alternative available to workers. Topics covered include (1) a profile of job sharers, (2) response to job sharing, (3) establishing a job share, (4) job sharing in operation, and (5) legal analysis of job sharing. (CH)
Public sentiment and discourse about Zika virus on Instagram.
Seltzer, E K; Horst-Martz, E; Lu, M; Merchant, R M
2017-09-01
Social media have strongly influenced the awareness and perceptions of public health emergencies, and a considerable amount of social media content is now shared through images, rather than text alone. This content can impact preparedness and response due to the popularity and real-time nature of social media platforms. We sought to explore how the image-sharing platform Instagram is used for information dissemination and conversation during the current Zika outbreak. This was a retrospective review of publicly posted images about Zika on Instagram. Using the keyword '#zika' we identified 500 images posted on Instagram from May to August 2016. Images were coded by three reviewers, and contextual information was collected for each image about sentiment, image type, content, audience, geography, reliability, and engagement. Of 500 images tagged with #zika, 342 (68%) contained content actually related to Zika. Of the 342 Zika-specific images, 299 were coded as 'health' and 193 were coded 'public interest'. Some images had multiple 'health' and 'public interest' codes. Health images tagged with #zika were primarily related to transmission (43%, 129/299) and prevention (48%, 145/299). Transmission-related posts were more often about mosquito-human transmission (73%, 94/129) than human-human transmission (27%, 35/129). Mosquito bite prevention posts (84%, 122/145) outnumbered safe sex prevention posts (16%, 23/145). Images with a target audience were primarily aimed at women (95%, 36/38). Many posts (60%, 61/101) included misleading, incomplete, or unclear information about the virus. Additionally, many images expressed fear and negative sentiment (51%, 79/156). Instagram can be used to characterize public sentiment and highlight areas of focus for public health, such as correcting misleading or incomplete information or expanding messages to reach diverse audiences. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
The Image and Data Archive at the Laboratory of Neuro Imaging.
Crawford, Karen L; Neu, Scott C; Toga, Arthur W
2016-01-01
The LONI Image and Data Archive (IDA) is a repository for sharing and long-term preservation of neuroimaging and biomedical research data. Originally designed to archive strictly medical image files, the IDA has evolved over the last ten years and now encompasses the storage and dissemination of neuroimaging, clinical, biospecimen, and genetic data. In this article, we report upon the genesis of the IDA and how it currently securely manages data and protects data ownership. Copyright © 2015 Elsevier Inc. All rights reserved.
Digital radiography: spatial and contrast resolution
NASA Astrophysics Data System (ADS)
Bjorkholm, Paul; Annis, M.; Frederick, E.; Stein, J.; Swift, R.
1981-07-01
The addition of digital image collection and storage to standard and newly developed x-ray imaging techniques has allowed spectacular improvements in some diagnostic procedures. There is no reason to expect that the developments in this area are yet complete. But no matter what further developments occur in this field, all the techniques will share a common element, digital image storage and processing. This common element alone determines some of the important imaging characteristics. These will be discussed using one system, the Medical MICRODOSE System as an example.
Blood pressure and cerebral white matter share common genetic factors in Mexican Americans.
Kochunov, Peter; Glahn, David C; Lancaster, Jack; Winkler, Anderson; Karlsgodt, Kathrin; Olvera, Rene L; Curran, Joanna E; Carless, Melanie A; Dyer, Thomas D; Almasy, Laura; Duggirala, Ravi; Fox, Peter T; Blangero, John
2011-02-01
Elevated arterial pulse pressure and blood pressure (BP) can lead to atrophy of cerebral white matter (WM), potentially attributable to shared genetic factors. We calculated the magnitude of shared genetic variance between BP and fractional anisotropy of water diffusion, a sensitive measurement of WM integrity, in a well-characterized population of Mexican Americans. The patterns of whole-brain and regional genetic overlap between BP and fractional anisotropy were interpreted in the context of the pulse-wave encephalopathy theory. We also tested whether the regional pattern in genetic pleiotropy is modulated by the phylogeny of WM development. BP and high-resolution (1.7 × 1.7 × 3 mm; 55 directions) diffusion tensor imaging data were analyzed for 332 (202 females; mean age 47.9 ± 13.3 years) members of the San Antonio Family Heart Study. Bivariate genetic correlation analysis was used to calculate the genetic overlap between several BP measurements (pulse pressure, systolic BP, and diastolic BP) and fractional anisotropy (whole-brain and regional values). Intersubject variance in pulse pressure and systolic BP exhibited a significant genetic overlap with variance in whole-brain fractional anisotropy values, sharing 36% and 22% of genetic variance, respectively. Regionally, shared genetic variance was significantly influenced by rates of WM development (r=-0.75; P=0.01). The pattern of genetic overlap between BP and WM integrity was generally in agreement with the pulse-wave encephalopathy theory. Our study provides evidence that a set of pleiotropically acting genetic factors jointly influence phenotypic variation in BP and WM integrity. The magnitude of this overlap appears to be influenced by the phylogeny of WM development, suggesting a possible role for genotype-by-age interactions.
The Development of GIS Educational Resources Sharing among Central Taiwan Universities
NASA Astrophysics Data System (ADS)
Chou, T.-Y.; Yeh, M.-L.; Lai, Y.-C.
2011-09-01
Using GIS in the classroom enhances students' computer skills and broadens their range of knowledge. This paper highlights GIS integration on an e-learning platform and introduces a variety of rich educational resources. The research project demonstrates tools for an e-learning environment and delivers case studies on learning interaction from Central Taiwan universities. Feng Chia University (FCU) obtained a remarkable academic project, subsidized by the Ministry of Education, and developed an e-learning platform for excellence in teaching/learning programs among Central Taiwan's universities. The aim of the project is to integrate the educational resources of 13 universities in central Taiwan, with FCU serving as the hub university. To overcome the problem of distance, e-platforms have been established to create experiences with collaboration-enhanced learning. The e-platforms coordinate web service access among the educational community and deliver GIS educational resources. Most of the GIS-related courses cover the development of GIS, principles of cartography, spatial data analysis and overlay, terrain analysis, buffer analysis, 3D GIS applications, remote sensing, GPS technology, WebGIS, MobileGIS, and ArcGIS manipulation. In each GIS case study, students are taught to understand geographic meaning, collect spatial data, and then use ArcGIS software to analyze the spatial data. One of the e-learning platforms provides lesson plans and presentation slides, so students can learn ArcGIS online. As they analyze spatial data, they can connect to the GIS hub to get the data they need, including satellite images, aerial photos, and vector data. Moreover, the e-learning platforms provide solutions and resources, with different levels of image scale integrated into the systems. Multi-scale spatial development and analyses in central Taiwan integrate academic research resources among CTTLRC partners, establish a decision-making support mechanism in teaching and learning, and accelerate communication, cooperation, and sharing among academic units.
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even executed without dedicated ultrasound hardware.
Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model
NASA Astrophysics Data System (ADS)
Li, X. L.; Zhao, Q. H.; Li, Y.
2017-09-01
Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the probability of a pixel is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results can be obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results on simulated and real SAR images.
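The pixel-to-cluster dissimilarity and its regional sum can be sketched directly from the definition above: the negative log of a Gamma density for each pixel, summed over a sub-region. The shape/scale parameterization, function names, and numbers are illustrative assumptions, not the paper's exact model:

```python
import math

def neg_log_gamma(x, k, theta):
    """-log p(x | k, theta) for a Gamma(shape=k, scale=theta) density."""
    return -((k - 1) * math.log(x) - x / theta
             - k * math.log(theta) - math.lgamma(k))

def region_dissimilarity(pixels, k, theta):
    """Regional measure: all pixels in a Voronoi sub-region share one label,
    so the region's dissimilarity to a cluster is the sum over its pixels."""
    return sum(neg_log_gamma(x, k, theta) for x in pixels)

region = [3.1, 2.7, 3.4, 2.9]                             # intensities in one sub-region
d_c1 = region_dissimilarity(region, k=9.0, theta=0.33)    # cluster 1 (mean ~3.0)
d_c2 = region_dissimilarity(region, k=2.0, theta=4.0)     # cluster 2 (mean 8.0)
# fuzzy memberships would weight the region toward the lower-dissimilarity cluster
```

Operating on whole sub-regions rather than single pixels is what averages out the multiplicative speckle noise that defeats pixel-based clustering.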
Pediatric Traumatic Brain Injury and Autism: Elucidating Shared Mechanisms
Singh, Rahul; Nguyen, Linda; Motwani, Kartik; Swatek, Michelle
2016-01-01
Pediatric traumatic brain injury (TBI) and autism spectrum disorder (ASD) are two serious conditions that affect youth. Recent data, both preclinical and clinical, show that pediatric TBI and ASD share not only similar symptoms but also some of the same biologic mechanisms that cause these symptoms. Prominent symptoms for both disorders include gastrointestinal problems, learning difficulties, seizures, and sensory processing disruption. In this review, we highlight some of these shared mechanisms in order to discuss potential treatment options that might be applied for each condition. We discuss potential therapeutic and pharmacologic options as well as potential novel drug targets. Furthermore, we highlight advances in understanding of brain circuitry that is being propelled by improved imaging modalities. Going forward, advanced imaging will help in diagnosis and treatment planning strategies for pediatric patients. Lessons from each field can be applied to design better and more rigorous trials that can be used to improve guidelines for pediatric patients suffering from TBI or ASD. PMID:28074078
Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.
Haoliang Yuan; Yuan Yan Tang
2017-04-01
Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high-dimensional, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, which is formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed methods outperform many SL methods.
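As background for the RLR framework that SSSLR builds on: plain ridge linear regression learns a projection matrix with a simple closed form. This sketch shows only that baseline (not SSSLR's spatial or shared-structure terms), with illustrative variable names:

```python
import numpy as np

def ridge_projection(X, Y, lam=1.0):
    """Closed-form ridge linear regression:
        W = (X^T X + lam * I)^{-1} X^T Y

    X: (n_samples, n_bands) pixel spectra
    Y: (n_samples, n_classes) targets, e.g. one-hot class labels
    Returns the linear projection matrix W of shape (n_bands, n_classes).
    """
    d = X.shape[1]
    # Solve the regularized normal equations instead of inverting explicitly
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

Projecting a new pixel spectrum as `x @ W` yields its low-dimensional feature representation, which a classifier can then consume.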
2009-01-01
Background In recent years, the genome biology community has expended considerable effort to confront the challenges of managing heterogeneous data in a structured and organized way and developed laboratory information management systems (LIMS) for both raw and processed data. On the other hand, electronic notebooks were developed to record and manage scientific data, and facilitate data-sharing. Software that enables both management of large datasets and digital recording of laboratory procedures would serve a real need in laboratories using medium- and high-throughput techniques. Results We have developed iLAP (Laboratory data management, Analysis, and Protocol development), a workflow-driven information management system specifically designed to create and manage experimental protocols, and to analyze and share laboratory data. The system combines experimental protocol development, wizard-based data acquisition, and high-throughput data analysis into a single, integrated system. We demonstrate the power and the flexibility of the platform using a microscopy case study based on a combinatorial multiple fluorescence in situ hybridization (m-FISH) protocol and 3D-image reconstruction. iLAP is freely available under the open source license AGPL from http://genome.tugraz.at/iLAP/. Conclusion iLAP is a flexible and versatile information management system, which has the potential to close the gap between electronic notebooks and LIMS and can therefore be of great value for a broad scientific community. PMID:19941647
Baby Boomers in an Active Adult Retirement Community: Comity Interrupted
Roth, Erin G.; Keimig, Lynn; Rubinstein, Robert L.; Morgan, Leslie; Eckert, J. Kevin; Goldman, Susan; Peeples, Amanda D.
2012-01-01
Purpose of the Study: This article explores a clash between incoming Baby Boomers and older residents in an active adult retirement community (AARC). We examine issues of social identity and attitudes as these groups encounter each other. Design and Methods: Data are drawn from a multiyear ethnographic study of social relations in senior housing. Research at this site included in-depth, open-ended interviews (47), field notes (25), and participant observation in the field (500 hr). Biweekly research team discussions and the Atlas.ti software program facilitated the analysis. Findings: We begin with a poignant incident that has continued to engender feelings of rejection by elders with each retelling and suggests the power and prevalence of ageism in this AARC. We identify three pervasive themes: (a) social identity and image matter, (b) significant cultural and attitudinal differences exist between Boomers and older residents, and (c) shared age matters less than shared interests. Implications: Our data clearly show the operation of ageism in this community and an equating of being old with being sick. The conflict between these two age cohorts suggests that cohort consciousness among Boomers carries elements of age denial, shared by the older old. It also challenges the Third Age concept as a generational phenomenon. PMID:22391870
Bivariate Heritability of Total and Regional Brain Volumes: the Framingham Study
DeStefano, Anita L.; Seshadri, Sudha; Beiser, Alexa; Atwood, Larry D.; Massaro, Joe M.; Au, Rhoda; Wolf, Philip A.; DeCarli, Charles
2009-01-01
Heritability and genetic and environmental correlations of total and regional brain volumes were estimated from a large, generally healthy, community-based sample, to determine if there are common elements to the genetic influence of brain volumes and white matter hyperintensity volume. There were 1538 Framingham Heart Study participants with brain volume measures from quantitative magnetic resonance imaging (MRI) who were free of stroke and other neurological disorders that might influence brain volumes and who were members of families with at least two Framingham Heart Study participants. Heritability was estimated using variance component methodology and adjusting for the components of the Framingham stroke risk profile. Genetic and environmental correlations between traits were obtained from bivariate analysis. Heritability estimates ranging from 0.46 to 0.60 were observed for total brain, white matter hyperintensity, hippocampal, temporal lobe, and lateral ventricular volumes. Moderate, yet significant, heritability was observed for the other measures. Bivariate analyses demonstrated that relationships between brain volume measures, except for white matter hyperintensity, reflected both moderate to strong shared genetic and shared environmental influences. This study confirms strong genetic effects on brain and white matter hyperintensity volumes. These data extend current knowledge by showing that these two different types of MRI measures do not share underlying genetic or environmental influences. PMID:19812462
A Quantitative Framework for Flower Phenotyping in Cultivated Carnation (Dianthus caryophyllus L.)
Chacón, Borja; Ballester, Roberto; Birlanga, Virginia; Rolland-Lagan, Anne-Gaëlle; Pérez-Pérez, José Manuel
2013-01-01
The most important breeding goals in ornamental crops are plant appearance and flower characteristics, for which selection is performed visually on the direct offspring of crossings. We developed an image analysis toolbox for the acquisition of flower and petal images from cultivated carnation (Dianthus caryophyllus L.) that was validated by a detailed analysis of flower and petal size and shape in 78 commercial cultivars of D. caryophyllus, including 55 standard, 22 spray and 1 pot carnation cultivars. Correlation analyses allowed us to reduce the number of parameters accounting for the observed variation in flower and petal morphology. Convexity was used as a descriptor for the level of serration in flowers and petals. We used a landmark-based approach that allowed us to identify eight main principal components (PCs) accounting for most of the variance observed in petal shape. The effect and the strength of these PCs in standard and spray carnation cultivars are consistent with shared underlying mechanisms involved in the morphological diversification of petals in both subpopulations. Our results also indicate that neighbor-joining trees built with morphological data might infer certain phylogenetic relationships among carnation cultivars. Based on estimated broad-sense heritability values for some flower and petal features, different genetic determinants might modulate the responses of flower and petal morphology to environmental cues in this species. We believe our image analysis toolbox could allow capturing flower variation in other species of high ornamental value. PMID:24349209
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown to be much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
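For the direct one-qubit-per-pixel coding, the decoding error caused by a basis misalignment of angle theta acts like bit-flip noise with probability sin^2(theta/2), since the overlap between a state and the rotated orthogonal basis state is sin(theta/2). This is an illustrative calculation of that standard relation, not the paper's entangled-pair protocol:

```python
from math import pi, sin

def flip_probability(theta):
    """Probability of decoding a pixel bit incorrectly when the receiver's
    measurement basis is rotated by angle theta relative to the emitter's
    frame (direct one-qubit-per-pixel coding): |<1'|0>|^2 = sin^2(theta/2)."""
    return sin(theta / 2) ** 2
```

At perfect alignment (theta = 0) no errors occur, at theta = pi/2 each pixel bit flips with probability 1/2, and at theta = pi every bit is inverted, which matches the abstract's description of noise increasing with misalignment.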
NASA Astrophysics Data System (ADS)
You, Xiaozhen; Yao, Zhihong
2005-04-01
As a standard for the communication and storage of medical digital images, DICOM has been playing a very important role in the integration of hospital information. In DICOM, tags are expressed by numbers, and only standard data elements can be shared by looking up the Data Dictionary, while private tags cannot. As such, a DICOM file's readability and extensibility are limited. In addition, reading DICOM files requires special software. In our research, we introduced XML into DICOM, defining an XML-based DICOM special transfer format, XML-DCM, and a DICOM storage format, X-DCM, as well as developing a program package to realize format interchange among DICOM, XML-DCM, and X-DCM. XML-DCM is based on the DICOM structure while replacing numeric tags with accessible XML character string tags. The merits are as follows: a) every character string tag of XML-DCM has an explicit meaning, so users can easily understand standard data elements and private data elements without looking up the Data Dictionary; in this way, the readability and data sharing of DICOM files are greatly improved. b) According to their requirements, users can set new character string tags with explicit meaning in their own systems to extend the capacity of data elements. c) Users can read the medical image and associated information conveniently through IE, ultimately enlarging the scope of data sharing. The application of the storage format X-DCM will reduce data redundancy and save storage memory. The results of practical application show that XML-DCM favors the integration and sharing of medical image data among different systems and devices.
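The core idea of XML-DCM, replacing numeric DICOM tags with readable character-string tags looked up in the Data Dictionary, can be sketched as follows. The dictionary fragment, element names, and output layout are illustrative assumptions, not the authors' actual format definition:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of the DICOM Data Dictionary: numeric tag -> keyword
DATA_DICTIONARY = {
    (0x0010, 0x0010): "PatientName",
    (0x0008, 0x0060): "Modality",
}

def to_xml_dcm(elements):
    """Convert (group, element, value) triples into an XML-DCM-style tree,
    replacing numeric tags with readable character-string tags."""
    root = ET.Element("XML-DCM")
    for group, elem, value in elements:
        # Fall back to a hex-based name for private or unknown tags, which
        # a site could extend with its own meaningful tag names
        name = DATA_DICTIONARY.get((group, elem), f"Tag{group:04X}{elem:04X}")
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

Because the output is plain XML with self-describing element names, it can be opened in any browser without DICOM-aware software, which is the readability gain the abstract describes.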
MGH-USC Human Connectome Project datasets with ultra-high b-value diffusion MRI.
Fan, Qiuyun; Witzel, Thomas; Nummenmaa, Aapo; Van Dijk, Koene R A; Van Horn, John D; Drews, Michelle K; Somerville, Leah H; Sheridan, Margaret A; Santillana, Rosario M; Snyder, Jenna; Hedden, Trey; Shaw, Emily E; Hollinshead, Marisa O; Renvall, Ville; Zanzonico, Roberta; Keil, Boris; Cauley, Stephen; Polimeni, Jonathan R; Tisdall, Dylan; Buckner, Randy L; Wedeen, Van J; Wald, Lawrence L; Toga, Arthur W; Rosen, Bruce R
2016-01-01
The MGH-USC CONNECTOM MRI scanner housed at the Massachusetts General Hospital (MGH) is a major hardware innovation of the Human Connectome Project (HCP). The 3T CONNECTOM scanner is capable of producing a magnetic field gradient of up to 300 mT/m strength for in vivo human brain imaging, which greatly shortens the time spent on diffusion encoding, and decreases the signal loss due to T2 decay. To demonstrate the capability of the novel gradient system, data of healthy adult participants were acquired for this MGH-USC Adult Diffusion Dataset (N=35), minimally preprocessed, and shared through the Laboratory of Neuro Imaging Image Data Archive (LONI IDA) and the WU-Minn Connectome Database (ConnectomeDB). Another purpose of sharing the data is to facilitate methodological studies of diffusion MRI (dMRI) analyses utilizing high diffusion contrast, which perhaps is not easily feasible with a standard MR gradient system. In addition, acquisition of the MGH-Harvard-USC Lifespan Dataset is currently underway to include 120 healthy participants ranging from 8 to 90 years old, which will also be shared through LONI IDA and ConnectomeDB. Here we describe the efforts of the MGH-USC HCP consortium in acquiring and sharing the ultra-high b-value diffusion MRI data and provide a report on data preprocessing and access. We conclude with a demonstration of the example data, along with results of standard diffusion analyses, including q-ball Orientation Distribution Function (ODF) reconstruction and tractography. Copyright © 2015 Elsevier Inc. All rights reserved.
A social-technological epistemology of clinical decision-making as mediated by imaging.
van Baalen, Sophie; Carusi, Annamaria; Sabroe, Ian; Kiely, David G
2017-10-01
In recent years there has been growing attention to the epistemology of clinical decision-making, but most studies have taken the individual physicians as the central object of analysis. In this paper we argue that knowing in current medical practice has an inherently social character and that imaging plays a mediating role in these practices. We have analyzed clinical decision-making within a medical expert team involved in diagnosis and treatment of patients with pulmonary hypertension (PH), a rare disease requiring multidisciplinary team involvement in diagnosis and management. Within our field study, we conducted observations, interviews, video tasks, and a panel discussion. Decision-making in the PH clinic involves combining evidence from heterogeneous sources into a cohesive framing of a patient, in which interpretations of the different sources can be made consistent with each other. Because pieces of evidence are generated by people with different expertise and interpretation and adjustments take place in interaction between different experts, we argue that this process is socially distributed. Multidisciplinary team meetings are an important place where information is shared, discussed, interpreted, and adjusted, allowing for a collective way of seeing and a shared language to be developed. We demonstrate this with an example of image processing in the PH service, an instance in which knowledge is distributed over multiple people who play a crucial role in generating an evaluation of right heart function. Finally, we argue that images fulfill a mediating role in distributed knowing in 3 ways: first, as enablers or tools in acquiring information; second, as communication facilitators; and third, as pervasively framing the epistemic domain. With this study of clinical decision-making in diagnosis and treatment of PH, we have shown that clinical decision-making is highly social and mediated by technologies. 
The epistemology of clinical decision-making needs to take social and technological mediation into account. © 2016 The Authors Journal of Evaluation in Clinical Practice Published by John Wiley & Sons Ltd.
Deep Sea Gazing: Making Ship-Based Research Aboard RV Falkor Relevant and Accessible
NASA Astrophysics Data System (ADS)
Wiener, C.; Zykov, V.; Miller, A.; Pace, L. J.; Ferrini, V. L.; Friedman, A.
2016-02-01
Schmidt Ocean Institute (SOI) is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation, and open sharing of information. Our research vessel Falkor provides ship time to selected scientists and supports a wide range of scientific functions, including ROV operations with live streaming capabilities. Since 2013, SOI has live streamed 55 ROV dives in high definition and recorded them onto YouTube. This has totaled over 327 hours of video, which received 1,450,461 views in 2014. SOI is one of the only research programs that makes their entire dive series available online, creating a rich collection of video data sets. In doing this, we provide an opportunity for scientists to make new discoveries in the video data that may have been missed earlier. These data sets are also available to students, allowing them to engage with real data in the classroom. SOI's video collection is also being used in a newly developed video management system, Ocean Video Lab. Telepresence-enabled research is an important component of Falkor cruises, which is exemplified by several that were conducted in 2015. This presentation will share a few case studies including an image tagging citizen science project conducted through the Squidle interface in partnership with the Australian Center for Field Robotics. Using real-time image data collected in the Timor Sea, numerous shore-based citizens created seafloor image tags that could be used by machine learning algorithms on Falkor's high performance computer (HPC) to accomplish habitat characterization. With the use of the HPC system, real-time robot tracking, image tagging, and other outreach connections were made possible, allowing scientists on board to engage with the public and build their knowledge base.
The above mentioned examples will be used to demonstrate the benefits of remote data analysis and participatory engagement in science-based telepresence.
Lefebvre, Aline; Beggiato, Anita; Bourgeron, Thomas; Toro, Roberto
2015-07-15
Patients with autism have often been reported to have a smaller corpus callosum (CC) than control subjects. We conducted a meta-analysis of the literature, analyzed the CC in 694 subjects of the Autism Brain Imaging Data Exchange project, and performed computer simulations to study the effect of different analysis strategies. Our meta-analysis suggested a group difference in CC size; however, the studies were heavily underpowered (20% power to detect Cohen's d = .3). In contrast, we did not observe significant differences in the Autism Brain Imaging Data Exchange cohort, despite having achieved 99% power. However, we observed that CC scaled nonlinearly with brain volume (BV): large brains had a proportionally smaller CC. Our simulations showed that because of this nonlinearity, CC normalization could not control for eventual BV differences, but using BV as a covariate in a linear model would. We also observed a weaker correlation of IQ and BV in cases compared with control subjects. Our simulations showed that matching populations by IQ could then induce artifactual BV differences. The lack of statistical power in the previous literature prevents us from establishing the reality of the claims of a smaller CC in autism, and our own analyses did not find any. However, the nonlinear relationship between CC and BV and the different correlation between BV and IQ in cases and control subjects may induce artifactual differences. Overall, our results highlight the necessity for open data sharing to provide a more solid ground for the discovery of neuroimaging biomarkers within the context of the wide human neuroanatomical diversity. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
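The normalization point can be illustrated with a toy simulation: two groups follow the same nonlinear CC-BV law but differ in BV, so ratio normalization (CC/BV) leaves a spurious group gap, while covariate adjustment (done here on log scales, where a power law is linear) removes it. All numbers and the exponent are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Both groups follow the SAME allometric law CC = BV**0.7; group B simply
# has larger brains on average, with no true group effect beyond that.
bv_a = rng.normal(1000.0, 50.0, n)
bv_b = rng.normal(1100.0, 50.0, n)
cc_a, cc_b = bv_a ** 0.7, bv_b ** 0.7

# Ratio normalization: CC/BV = BV**(-0.3) still depends on BV,
# so the groups show a spurious difference.
ratio_gap = (cc_b / bv_b).mean() - (cc_a / bv_a).mean()

# Covariate adjustment: regress log CC on log BV over the pooled sample,
# then compare residuals between groups; the spurious gap vanishes.
x = np.log(np.concatenate([bv_a, bv_b]))
y = np.log(np.concatenate([cc_a, cc_b]))
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
resid_gap = resid[n:].mean() - resid[:n].mean()
```

Here `ratio_gap` is clearly nonzero while `resid_gap` is essentially zero, which mirrors the abstract's conclusion that a BV covariate, unlike normalization, controls for BV differences under nonlinear scaling.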
A Map Mash-Up Application: Investigating the Temporal Effects of Climate Change on the Salt Lake Basin
NASA Astrophysics Data System (ADS)
Kirtiloglu, O. S.; Orhan, O.; Ekercin, S.
2016-06-01
The main purpose of this paper is to investigate the effects of climate change that have occurred since the beginning of the twenty-first century in the Konya Closed Basin (KCB), located in the semi-arid central Anatolian region of Turkey, and particularly in the Salt Lake region, where many major wetlands of the KCB are situated, and to share the analysis results online in a Web Geographical Information System (GIS) environment. 71 Landsat 5-TM, 7-ETM+ and 8-OLI images and meteorological data obtained from 10 meteorological stations have been used within the scope of this work. 56 of the Landsat images have been used for extraction of the Salt Lake surface area from multi-temporal Landsat imagery collected from 2000 to 2014 in the Salt Lake basin. 15 of the Landsat images have been used to make thematic maps of the Normalised Difference Vegetation Index (NDVI) in the KCB, and the data from the 10 meteorological stations have been used to generate the Standardized Precipitation Index (SPI), which is used in drought studies. For the purpose of visualizing and sharing the results, a Web GIS-like environment has been established by using Google Maps and its data storage and manipulation product Fusion Tables, all of which are free Google Web service elements. The infrastructure of the web application includes HTML5, CSS3, JavaScript, Google Maps API V3 and Google Fusion Tables API technologies. These technologies make it possible to build effective "map mash-ups" involving an embedded Google Map in a Web page, storing the spatial or tabular data in Fusion Tables and adding these data as a map layer on the embedded map. The analysis process and the map mash-up application are discussed in detail in the main sections of this paper.
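The two indices used in the study have simple definitional cores, sketched below. NDVI is the standard band ratio; the `simple_spi` function is only a standardized precipitation anomaly, since the full operational SPI additionally fits a Gamma distribution to precipitation before transforming to a standard normal:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # Guard against division by zero on dark pixels
    return (nir - red) / np.maximum(nir + red, 1e-12)

def simple_spi(precip):
    """Simplified standardized precipitation anomaly in units of
    standard deviations (the operational SPI first fits a Gamma
    distribution to the precipitation record)."""
    p = np.asarray(precip, dtype=float)
    return (p - p.mean()) / p.std()
```

Applied pixel-wise to the red and near-infrared Landsat bands, `ndvi` produces the thematic vegetation maps described above; `simple_spi` applied to a station's precipitation series flags dry periods as strongly negative values.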
NASA Astrophysics Data System (ADS)
Uchill, Joseph H.; Assadi, Amir H.
2003-01-01
The advent of the internet has opened a host of new and exciting questions in the science and mathematics of information organization and data mining. In particular, a highly ambitious promise of the internet is to bring the bulk of human knowledge to everyone with access to a computer network, providing a democratic medium for sharing and communicating knowledge regardless of the language of the communication. The development of sharing and communication of knowledge via transfer of digital files is the first crucial achievement in this direction. Nonetheless, available solutions to numerous ancillary problems remain far from satisfactory. Among such outstanding problems are the first few fundamental questions that have been responsible for the emergence and rapid growth of the new field of Knowledge Engineering, namely, classification of forms of data, their effective organization, extraction of knowledge from massive distributed data sets, and the design of fast effective search engines. The precision of machine learning algorithms in classification and recognition of image data (e.g. those scanned from books and other printed documents) is still far from human performance and speed in similar tasks. Discriminating the many forms of ASCII data from each other is not as difficult in view of the emerging universal standards for file format. Nonetheless, most of the past and relatively recent human knowledge is yet to be transformed and saved in such machine-readable formats. In particular, an outstanding problem in knowledge engineering is the problem of organization and management--with precision comparable to human performance--of knowledge in the form of images of documents that broadly belong to either text, image or a blend of both. It was shown that the effectiveness of OCR was intertwined with the success of language and font recognition.
From damage to discovery via virtual unwrapping: Reading the scroll from En-Gedi
Seales, William Brent; Parker, Clifford Seth; Segal, Michael; Tov, Emanuel; Shor, Pnina; Porath, Yosef
2016-01-01
Computer imaging techniques are commonly used to preserve and share readable manuscripts, but capturing writing locked away in ancient, deteriorated documents poses an entirely different challenge. This software pipeline—referred to as “virtual unwrapping”—allows textual artifacts to be read completely and noninvasively. The systematic digital analysis of the extremely fragile En-Gedi scroll (the oldest Pentateuchal scroll in Hebrew outside of the Dead Sea Scrolls) reveals the writing hidden on its untouchable, disintegrating sheets. Our approach for recovering substantial ink-based text from a damaged object results in readable columns at such high quality that serious critical textual analysis can occur. Hence, this work creates a new pathway for subsequent textual discoveries buried within the confines of damaged materials. PMID:27679821
Current and future trends in marine image annotation software
NASA Astrophysics Data System (ADS)
Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.
2016-12-01
Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software by their capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images.
Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of specific software are outlined, the ideal software is discussed, and future trends are presented.
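The basic MIAS behavior described above, logging events in a time-stamped and optionally geo-referenced manner, can be sketched in a few lines. The CSV schema and field names are hypothetical, not taken from any of the 23 reviewed tools:

```python
import csv
import io
from datetime import datetime, timezone

def log_event(writer, label, lat=None, lon=None, when=None):
    """Append one time-stamped (and optionally geo-referenced)
    annotation event to a CSV log, MIAS-style."""
    when = when or datetime.now(timezone.utc)
    writer.writerow([when.isoformat(), label, lat, lon])

# Minimal usage: an in-memory log with one annotated sighting
buf = io.StringIO()
w = csv.writer(buf)
w.writerow(["timestamp", "label", "lat", "lon"])
log_event(w, "coral colony", lat=38.52, lon=-28.64,
          when=datetime(2016, 12, 1, 10, 0, tzinfo=timezone.utc))
```

Because each row carries a timestamp (and optionally a position), such logs can later be joined against navigation and sensor data from the recording platform, which is the database integration the review emphasizes.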
Unified Database for Rejected Image Analysis Across Multiple Vendors in Radiography.
Little, Kevin J; Reiser, Ingrid; Liu, Lili; Kinsey, Tiffany; Sánchez, Adrian A; Haas, Kateland; Mallory, Florence; Froman, Carmen; Lu, Zheng Feng
2017-02-01
Reject rate analysis has been part of radiography departments' quality control since the days of screen-film radiography. In the era of digital radiography, one might expect that reject rate analysis is easily facilitated because of readily available information produced by the modality during the examination procedure. Unfortunately, this is not always the case. The lack of an industry standard and the wide variety of system log entries and formats have made it difficult to implement a robust multivendor reject analysis program, and logs do not always include all relevant information. The increased use of digital detectors exacerbates this problem because of higher reject rates associated with digital radiography compared with computed radiography. In this article, the authors report on the development of a unified database for vendor-neutral reject analysis across multiple sites within an academic institution and share their experience from a team-based approach to reduce reject rates. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
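A vendor-neutral reject analysis hinges on mapping each vendor's log entries into one unified schema before computing rates. A minimal sketch of that idea follows; the vendor field names are invented for illustration and do not correspond to any actual vendor's log format:

```python
def normalize(record, vendor):
    """Map a vendor-specific log record into a unified schema.
    Field names per vendor are hypothetical examples."""
    if vendor == "A":
        return {"exam": record["ExamID"],
                "rejected": record["Status"] == "REJ"}
    if vendor == "B":
        return {"exam": record["study"],
                "rejected": bool(record["reject_flag"])}
    raise ValueError(f"unknown vendor: {vendor}")

def reject_rate(records):
    """Fraction of acquired images that were rejected."""
    total = len(records)
    return sum(r["rejected"] for r in records) / total if total else 0.0
```

Once every site's logs flow through `normalize`, reject rates become directly comparable across vendors and sites, which is the premise of the unified database the article describes.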
HCP: A Flexible CNN Framework for Multi-label Image Classification.
Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng
2015-10-26
Convolutional Neural Networks (CNNs) have demonstrated promising performance in single-label image classification tasks. However, how a CNN best copes with multi-label images remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on the Pascal VOC 2007 and VOC 2012 multi-label image datasets demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-art methods. In particular, the mAP reaches 90.5% with HCP only and 93.2% after fusion with our complementary result in [44] based on hand-crafted features on the VOC 2012 dataset.
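The cross-hypothesis max-pooling fusion that HCP uses to turn per-hypothesis scores into image-level multi-label predictions can be illustrated in a few lines (the scores below are made up; the real system pools CNN outputs):

```python
# HCP's aggregation step: per-hypothesis label scores are fused by
# cross-hypothesis max pooling into image-level multi-label scores.

def hcp_max_pooling(hypothesis_scores):
    """hypothesis_scores: one score vector per object-segment hypothesis
    (one entry per class). Returns the element-wise maximum."""
    n_classes = len(hypothesis_scores[0])
    return [max(h[c] for h in hypothesis_scores) for c in range(n_classes)]

# Three object-segment hypotheses scored against 4 labels:
scores = [
    [0.9, 0.1, 0.0, 0.2],  # hypothesis covering one object
    [0.1, 0.8, 0.1, 0.0],  # hypothesis covering another object
    [0.2, 0.3, 0.1, 0.1],  # noisy hypothesis, suppressed by the pooling
]
print(hcp_max_pooling(scores))  # [0.9, 0.8, 0.1, 0.2]
```

The max makes the fusion robust to noisy or redundant hypotheses: a hypothesis only contributes where it is the most confident one for some label.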
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zoberi, J.
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) review prostate HDR techniques based on the imaging modality; (2) discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
European health telematics networks for positron emission tomography
NASA Astrophysics Data System (ADS)
Kontaxakis, George; Pozo, Miguel Angel; Ohl, Roland; Visvikis, Dimitris; Sachpazidis, Ilias; Ortega, Fernando; Guerra, Pedro; Cheze-Le Rest, Catherine; Selby, Peter; Pan, Leyun; Diaz, Javier; Dimitrakopoulou-Strauss, Antonia; Santos, Andres; Strauss, Ludwig; Sakas, Georgios
2006-12-01
A pilot network of positron emission tomography (PET) centers across Europe has been set up employing telemedicine services. The primary aim is to bring all PET centers in Europe (and beyond) closer together by integrating advanced medical imaging technology and health telematics network applications into a single, easy-to-operate health telematics platform, which allows secure transmission of medical data via a variety of telecommunication channels and fosters cooperation between professionals in the field. The platform runs on PCs with Windows 2000/XP and incorporates advanced techniques for image visualization, analysis and fusion. The communication between two connected workstations is based on a TCP/IP connection secured by secure socket layers and virtual private network or Jabber protocols. A teleconsultation can be online (with both physicians present) or offline (via transmission of messages which contain image data and other information). An interface-sharing protocol enables online teleconsultations even over low-bandwidth connections. This initiative promotes cooperation and improved communication between nuclear medicine professionals, offering options for second opinion and training. It permits physicians to remotely consult patient data, even when they are away from the physical examination site.
Häupl, T; Skapenko, A; Hoppe, B; Skriner, K; Burkhardt, H; Poddubnyy, D; Ohrndorf, S; Sewerin, P; Mansmann, U; Stuhlmüller, B; Schulze-Koops, H; Burmester, G-R
2018-05-01
Rheumatic diseases are among the most common chronic inflammatory disorders. Besides severe pain and progressive destruction of the joints, rheumatoid arthritis (RA), the spondyloarthritides (SpA) and psoriatic arthritis (PsA) impair working ability, reduce quality of life and, if treated insufficiently, may increase mortality. With the introduction of biologics to treat these diseases, the demand for biomarkers for early diagnosis and therapeutic stratification has been growing continuously. The main goal of the consortium ArthroMark is to identify new biomarkers and to apply modern imaging technologies for diagnosis, follow-up assessment and stratification of patients with RA, SpA and PsA. With the development of new biomarkers for these diseases, the ArthroMark project contributes to research in chronic diseases of the musculoskeletal system. The cooperation between different national centers will utilize site-specific resources, such as biobanks and clinical studies, for sharing and productive networking of individual core areas in biomarker analysis. Joint data management and harmonization of data assessment, as well as best-practice characterization of patients with new imaging technologies, will optimize the quality of marker validation.
Nonparametric Hierarchical Bayesian Model for Functional Brain Parcellation
Lashkari, Danial; Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina
2011-01-01
We develop a method for unsupervised analysis of functional brain images that learns group-level patterns of functional response. Our algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over the sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to simultaneously learn the patterns of response that are shared across the group, and to estimate the number of these patterns supported by data. Inference based on this model enables automatic discovery and characterization of salient and consistent patterns in functional signals. We apply our method to data from a study that explores the response of the visual cortex to a collection of images. The discovered profiles of activation correspond to selectivity to a number of image categories such as faces, bodies, and scenes. More generally, our results appear superior to the results of alternative data-driven methods in capturing the category structure in the space of stimuli. PMID:21841977
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Detection and Evaluation of Skin Disorders by One of Photogrammetric Image Analysis Methods
NASA Astrophysics Data System (ADS)
Güçin, M.; Patias, P.; Altan, M. O.
2012-08-01
Abnormalities of the skin may vary from simple acne to painful wounds that affect a person's quality of life. Detection of such disorders at an early stage, followed by evaluation of the abnormalities, is of high importance. Here, photogrammetry offers a non-contact solution by providing geometrically highly accurate data. Photogrammetry, first used for topographic purposes, became through terrestrial photogrammetry a useful technique in non-topographic applications as well (Wolf et al., 2000). Moreover, as the use of photogrammetry extended in parallel with developments in technology, analogue photographs were replaced by digital images, and digital image processing techniques now allow the modification of digital images using filters, registration processes, etc. In addition, by registering images to the same coordinate system, photogrammetry can serve as a tool for the comparison of temporal imaging data. The aim of this study is to examine several digital image processing techniques, in particular digital filters, which might be useful for detecting skin disorders. We examine software that is affordable to purchase and user friendly, requiring neither expertise nor pre-training. Since this is preliminary work for subsequent, deeper studies, Adobe Photoshop 7.0 is used as the software in the present study. In addition, Adobe Photoshop released a DesAcc plug-in with the CS3 version, providing full compatibility with DICOM (Digital Imaging and Communications in Medicine) and PACS (Picture Archiving and Communication System), which enables doctors to store all medical data together with the relevant images and share them if necessary.
Introducing keytagging, a novel technique for the protection of medical image-based tests.
Rubio, Óscar J; Alesanco, Álvaro; García, José
2015-08-01
This paper introduces keytagging, a novel technique to protect medical image-based tests by implementing image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. It relies on the association of tags (binary data strings) to stable, semistable or volatile features of the image, whose access keys (called keytags) depend on both the image and the tag content. Unlike watermarking, this technique can associate information to the most stable features of the image without distortion. Thus, this method preserves the clinical content of the image without the need for assessment, prevents eavesdropping and collusion attacks, and obtains a substantial capacity-robustness tradeoff with simple operations. The evaluation of this technique, involving images of different sizes from various acquisition modalities and image modifications that are typical in the medical context, demonstrates that all the aforementioned security measures can be implemented simultaneously and that the algorithm presents good scalability. In addition to this, keytags can be protected with standard Cryptographic Message Syntax and the keytagging process can be easily combined with JPEG2000 compression since both share the same wavelet transform. This reduces the delays for associating keytags and retrieving the corresponding tags to implement the aforementioned measures to only ≃30 and ≃90ms respectively. As a result, keytags can be seamlessly integrated within DICOM, reducing delays and bandwidth when the image test is updated and shared in secure architectures where different users cooperate, e.g. physicians who interpret the test, clinicians caring for the patient and researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component for organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The structure of the database is a multi-tier client-server architecture with a Relational Database Management System, Security Layer, Application Layer and User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to handle. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in the image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. The prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
Data sharing for public health research: A qualitative study of industry and academia.
Saunders, Pamela A; Wilhelm, Erin E; Lee, Sinae; Merkhofer, Elizabeth; Shoulson, Ira
2014-01-01
Data sharing is a key biomedical research theme for the 21st century. Biomedical data sharing is the exchange of data among (non)affiliated parties under mutually agreeable terms to promote scientific advancement and the development of safe and effective medical products. Wide sharing of research data is important for scientific discovery, medical product development, and public health. Data sharing enables improvements in development of medical products, more attention to rare diseases, and cost-efficiencies in biomedical research. We interviewed 11 participants about their attitudes and beliefs about data sharing. Using a qualitative, thematic analysis approach, our analysis revealed a number of themes including: experiences, approaches, perceived challenges, and opportunities for sharing data.
Fernandez, Nicolas F.; Gundersen, Gregory W.; Rahman, Adeeb; Grimes, Mark L.; Rikova, Klarisa; Hornbeck, Peter; Ma’ayan, Avi
2017-01-01
Most tools developed to visualize hierarchically clustered heatmaps generate static images. Clustergrammer is a web-based visualization tool with interactive features such as zooming, panning, filtering, reordering, sharing, performing enrichment analysis, and providing dynamic gene annotations. Clustergrammer can be used to generate shareable interactive visualizations by uploading a data table to a website, or by embedding Clustergrammer in Jupyter Notebooks. The Clustergrammer core libraries can also be used as a toolkit by developers to generate visualizations within their own applications. Clustergrammer is demonstrated using gene expression data from the Cancer Cell Line Encyclopedia (CCLE), original post-translational modification data collected from lung cancer cell lines by a mass spectrometry approach, and original cytometry by time-of-flight (CyTOF) single-cell proteomics data from blood. Clustergrammer enables the production of interactive web-based visualizations for the analysis of diverse biological data. PMID:28994825
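Behind any clustered-heatmap view sits the step of reordering rows so similar rows become adjacent. The sketch below illustrates that step with a naive single-linkage agglomerative clustering in pure Python; it is an illustration of the idea, not Clustergrammer's implementation:

```python
# Naive single-linkage agglomerative clustering producing a row order in
# which similar rows of a data table end up adjacent (heatmap leaf order).

def dist(a, b):
    """Euclidean distance between two rows."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_order(rows):
    clusters = [[i] for i in range(len(rows))]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):           # find the closest pair
            for j in range(i + 1, len(clusters)):
                d = min(dist(rows[a], rows[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters[0]

table = [[1.0, 1.1], [9.0, 8.8], [1.2, 0.9], [8.9, 9.1]]
order = cluster_order(table)
print(order)  # rows 0 and 2 (and rows 1 and 3) end up adjacent
```

A production tool would compute this once server-side (or in the browser) and then attach zooming, filtering and annotation interactions to the reordered matrix.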
NASA Astrophysics Data System (ADS)
Pariser, O.; Calef, F.; Manning, E. M.; Ardulov, V.
2017-12-01
We will present the implementation and a study of several use cases of utilizing Virtual Reality (VR) for immersive display, interaction and analysis of large and complex 3D datasets. These datasets have been acquired by instruments across several Earth, planetary and solar space robotics missions. First, we will describe the architecture of the common application framework that was developed to input data, interface with VR display devices and program input controllers in various computing environments. Tethered and portable VR technologies will be contrasted and the advantages of each highlighted. We will then present experimental immersive-analytics visual constructs that enable augmentation of 3D datasets with 2D ones such as images and statistical and abstract data. We will conclude by presenting a comparative analysis with traditional visualization applications and share the feedback provided by our users: scientists and engineers.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
The 1993 CAUSE Conference presented eight papers on the use of information technology to support the mission of colleges and universities. Papers include: (1) "Institutional Imaging: Sharing the Campus Image" (Carl Jacobson), which describes the University of Delaware's campus-wide information system; (2) "Electronic Paper…
University of Maryland MRSEC - Facilities: SEM/STM/AFM
Shared experimental facilities for SEM/STM/AFM imaging of conducting and non-conducting samples. The sample stage permits electronic device imaging under operational conditions. Specifications: image modes - STM, STS, MFM, EFM, SKPM, contact- and non-contact AFM; three sample contacts; 0.1 nm.
Cache write generate for parallel image processing on shared memory architectures.
Wittenbrink, C M; Somani, A K; Chen, C H
1996-01-01
We investigate cache write generate, a cache mode of our invention. We demonstrate that, for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hit rates, and cache latency. We use register-level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.
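A rough sketch of why such a write mode helps image processing: when the processor writes entire cache lines of output, a conventional write-allocate policy fetches each missed line from memory before overwriting it, while a generate-style policy can simply validate the line in place and skip the fetch. The model below is our own illustration of that contrast, not the paper's simulator:

```python
# Toy cache model counting memory fetches for a sequential output-write
# workload (e.g. producing an image buffer) under two write-miss policies.

def memory_fetches(n_words, line_words, policy):
    fetches = 0
    valid = set()                  # tags of lines currently in the cache
    for addr in range(n_words):    # sequential output writes
        line = addr // line_words
        if line not in valid:      # write miss
            if policy == "allocate":
                fetches += 1       # allocate: read the line from memory first
            valid.add(line)        # generate: just validate the line in place
    return fetches

print(memory_fetches(1024, 8, "allocate"))  # 128 fetches (one per line)
print(memory_fetches(1024, 8, "generate"))  # 0 fetches
```

Skipping those fetches is where the main-memory bandwidth saving comes from; the policy is only safe when the whole line will be written before it is read, which sequential image output satisfies.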
MO-B-BRC-00: Prostate HDR Treatment Planning - Considering Different Imaging Modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) review prostate HDR techniques based on the imaging modality; (2) discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
A midas plugin to enable construction of reproducible web-based image processing pipelines
Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A.; Oguz, Ipek
2013-01-01
Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline. PMID:24416016
Telemedicine optoelectronic biomedical data processing system
NASA Astrophysics Data System (ADS)
Prosolovska, Vita V.
2010-08-01
The telemedicine optoelectronic biomedical data processing system was created to share medical information in support of health oversight and of timely, rapid response to crises. The system includes the following main blocks: a bioprocessor, an analog-to-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen display for biomedical images. The rated temporal characteristics of the blocks are defined by the particular triggering optoelectronic couple in the analog-to-digital converters and by the imaging time of the matrix screen. The element base for hardware implementation of the developed matrix screen consists of integrated optoelectronic couples produced by selective epitaxy.
Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen
2015-01-01
The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals, organizes and presents information in a modern feed-like interface, provides access to a growing library of plugins that process these data (typically on a connected high-performance compute cluster), allows for easy data sharing between users and instances of ChRIS, and provides powerful 3D visualization and real-time collaboration.
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. 
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
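The grid-based spatial indexing idea described above, bucketing annotation geometries into coarse cells so a spatial join only compares shapes that can possibly intersect, can be sketched in pure Python (bounding boxes stand in for polygons; this is an illustration, not PAIS code):

```python
from collections import defaultdict

def grid_cells(box, cell=100):
    """Yield the grid cells overlapped by a bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    for gx in range(int(x0) // cell, int(x1) // cell + 1):
        for gy in range(int(y0) // cell, int(y1) // cell + 1):
            yield (gx, gy)

def overlaps(a, b):
    """Axis-aligned bounding-box intersection test."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def spatial_join(algo_boxes, human_boxes, cell=100):
    """Pairs (algorithm result, human annotation) whose boxes intersect,
    pruned by a coarse grid index so only co-located shapes are compared."""
    index = defaultdict(list)
    for i, b in enumerate(human_boxes):
        for c in grid_cells(b, cell):
            index[c].append(i)
    pairs = set()
    for j, b in enumerate(algo_boxes):
        for c in grid_cells(b, cell):
            for i in index[c]:
                if overlaps(b, human_boxes[i]):
                    pairs.add((j, i))
    return pairs

algo = [(10, 10, 50, 50), (900, 900, 950, 950)]   # algorithm results
human = [(40, 40, 80, 80), (500, 500, 550, 550)]  # human annotations
print(spatial_join(algo, human))  # {(0, 0)}: only the first pair intersects
```

A spatial database does the same pruning with proper geometries and parallel partitions; the payoff is identical: candidate pairs are restricted to shapes sharing a cell before any expensive geometric comparison runs.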
Fast Low-Rank Shared Dictionary Learning for Image Classification.
Tiep Huu Vu; Monga, Vishal
2017-11-01
Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework that separates the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification, with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e., we require that its spanning subspace have low dimension and that the coefficients corresponding to this dictionary be similar. For the particular dictionaries, we impose the well-known constraints stated in Fisher discrimination dictionary learning (FDDL). Furthermore, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. These algorithms can also be applied to FDDL and its extensions. The efficiency of these algorithms is theoretically and experimentally verified by comparing their complexities and running times with those of other well-known dictionary learning methods. Experimental results on widely used image data sets establish the advantages of our method over state-of-the-art dictionary learning methods.
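Schematically, the two constraints described above can be written as one objective. The notation below is a simplified paraphrase, not the paper's exact formulation: $D$, $X$ are the particular dictionaries and codes with an FDDL-style fidelity/Fisher term, $D_0$, $X_0$ are the shared dictionary and its codes, $\|\cdot\|_*$ is the nuclear norm (a convex surrogate for rank), and $M_0$ stacks the column mean of $X_0$ so shared coefficients are pulled toward each other:

```latex
\min_{D, D_0, X, X_0}\;
\underbrace{f(Y, D, X) + \lambda_1 \|X\|_1 + \lambda_2\, g(X)}_{\text{FDDL-style terms on particular dictionaries}}
\;+\;
\underbrace{\eta \|D_0\|_*}_{\text{low-rank shared dictionary}}
\;+\;
\underbrace{\tfrac{\lambda_2}{2}\|X_0 - M_0\|_F^2}_{\text{shared coefficients kept similar}}
```

The nuclear-norm term is what distinguishes this from COPAR-style models: it keeps the shared atoms in a low-dimensional subspace so common patterns cannot absorb discriminative structure.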
Using image mapping towards biomedical and biological data sharing
2013-01-01
Image-based data integration in eHealth and life sciences is typically concerned with the method used for anatomical space mapping, needed to retrieve, compare and analyse large volumes of biomedical data. In mapping one image onto another image, a mechanism is used to match and find the corresponding spatial regions which have the same meaning between the source and the matching image. Image-based data integration is useful for integrating data of various information structures. Here we discuss a broad range of issues related to data integration of various information structures, review exemplary work on image representation and mapping, and discuss the challenges that these techniques may bring. PMID:24059352
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
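The local-versus-cloud decision the authors describe can be caricatured with a back-of-the-envelope model. The break-even function below and all of its parameters (per-job runtimes, node count, hourly node price) are hypothetical placeholders for illustration, not the paper's actual cost/benefit formulae.

```python
import math

def cloud_vs_local(n_jobs, t_local_h, t_cloud_h, n_nodes, price_node_h):
    """Toy comparison of local serial execution vs. a pay-per-use cluster.

    n_jobs       -- number of independent pipeline runs
    t_local_h    -- hours per job on the local machine
    t_cloud_h    -- hours per job on one cloud node
    n_nodes      -- cloud nodes rented in parallel
    price_node_h -- price per node-hour (e.g., an on-demand EC2 rate)
    """
    local_hours = n_jobs * t_local_h
    waves = math.ceil(n_jobs / n_nodes)   # jobs execute in parallel waves
    cloud_hours = waves * t_cloud_h       # wall-clock time in the cloud
    cloud_cost = n_nodes * cloud_hours * price_node_h
    return {"local_hours": local_hours,
            "cloud_hours": cloud_hours,
            "cloud_cost": cloud_cost,
            "speedup": local_hours / cloud_hours}

# 100 DTI jobs, 2 h each locally, 2.5 h on a slower node, 20 nodes at $0.10/h
print(cloud_vs_local(100, 2.0, 2.5, 20, 0.10))
```

Even when each cloud node is slower per job, parallel waves can dominate: in this made-up setting 200 local hours collapse to 12.5 wall-clock hours for $25.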
Endodontic radiography: who is reading the digital radiograph?
Tewary, Shalini; Luzzo, Joseph; Hartwell, Gary
2011-07-01
Digital radiographic imaging systems have undergone tremendous improvements since their introduction. Advantages of digital radiographs over conventional films include lower radiation doses, instantaneous images, easy archiving and sharing of images, and manipulation of several radiographic properties that might aid diagnosis. A total of 6 observers, including 2 endodontic residents, 3 endodontists, and 1 oral radiologist, evaluated 150 molar digital periapical radiographs to determine which of the following conditions existed: normal periapical tissue, widened periodontal ligament, or presence of periapical radiolucency. The evaluators had full control over the radiograph parameters within the Planmeca Dimaxis software program. All images were viewed on the same computer monitor under ideal viewing conditions. The same 6 observers evaluated the same 150 digital images 3 months later. The data were analyzed to determine how well the evaluators agreed with each other (interobserver agreement) across the 2 rounds of observations and with themselves (intraobserver agreement). Fleiss kappa statistical analysis was used to measure the level of agreement among multiple raters. The overall Fleiss kappa value for interobserver agreement was 0.34 (P < .001) for the first round of interpretation and 0.35 (P < .001) for the second. This corresponds to fair (0.2-0.4) agreement among the 6 raters at both observation periods. A weighted kappa analysis was used to determine intraobserver agreement, which showed moderate agreement on average. The results indicate that the interpretation of a dental radiograph is subjective, irrespective of whether conventional or digital radiographs are used. The factors that appeared to have the most impact were the examiner's years of experience and familiarity with the given digital system.
Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
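Fleiss' kappa, used above for interobserver agreement, has a simple closed form: average per-subject agreement corrected by chance agreement from the marginal category proportions. A minimal implementation follows; the two-radiograph example table is illustrative data only, not the study's ratings.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories table of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)                       # number of subjects
    n = sum(counts[0])                    # raters per subject
    # Mean per-subject agreement: P_i = (sum_j n_ij^2 - n) / (n (n - 1))
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    # Chance agreement: P_e = sum_j p_j^2, p_j = category proportion
    k = len(counts[0])
    p_e = sum((sum(row[j] for row in counts) / (N * n)) ** 2
              for j in range(k))
    return (p_bar - p_e) / (1 - p_e)

# 6 raters, 2 radiographs, 3 categories (normal / widened PDL / radiolucency)
ratings = [[6, 0, 0],   # all raters agree: normal
           [0, 3, 3]]   # raters split between the other two categories
print(round(fleiss_kappa(ratings), 3))  # -> 0.52
```

Values in the 0.2-0.4 band, as reported above, thus mean raters agreed only modestly more often than the category base rates alone would predict.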
NASA Astrophysics Data System (ADS)
Nylk, Jonathan; McCluskey, Kaley; Aggarwal, Sanya; Tello, Javier A.; Dholakia, Kishan
2017-02-01
Light-sheet microscopy (LSM) has received great interest for fluorescence imaging applications in biomedicine as it facilitates three-dimensional visualisation of large sample volumes with high spatiotemporal resolution whilst minimising irradiation of, and photo-damage to, the specimen. Despite these advantages, LSM can only visualise superficial layers of turbid tissues, such as mammalian neural tissue. Propagation-invariant light modes have played a key role in the development of high-resolution LSM techniques as they overcome the natural divergence of a Gaussian beam, enabling uniform and thin light-sheets over large distances. Most notably, Bessel and Airy beam-based light-sheet imaging modalities have been demonstrated. In the single-photon excitation regime and in lightly scattering specimens, Airy-LSM has given competitive performance with advanced Bessel-LSM techniques. Airy and Bessel beams share the property of self-healing, the ability of the beam to regenerate its transverse beam profile after propagation around an obstacle. Bessel-LSM techniques have been shown to increase the penetration depth of the illumination into turbid specimens, but this effect has been understudied in biologically relevant tissues, particularly for Airy beams. It is expected that Airy-LSM will give a similar enhancement over Gaussian-LSM. In this paper, we report on the comparison of Airy-LSM and Gaussian-LSM imaging modalities within cleared and non-cleared mouse brain tissue. In particular, we examine image quality versus tissue depth by quantitative spatial Fourier analysis of neural structures in virally transduced fluorescent tissue sections, showing a three-fold enhancement at 50 μm depth into non-cleared tissue with Airy-LSM. Complementary analysis is performed by resolution measurements in bead-injected tissue sections.
Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki
2012-01-01
Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.
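The cross-language benchmark case the chapter uses, sequence translation, is small enough to state directly. Here is an in-process ("local call stack") version in Python, the kind of baseline against which RPC round-trips add network and serialisation overhead; it is a generic standard-genetic-code translator, not code from any particular Bio* project.

```python
# Standard genetic code laid out in TCAG order: the amino-acid string
# lists the translations of TTT, TTC, TTA, TTG, TCT, ... through GGG.
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def translate(dna):
    """Translate a DNA coding sequence to protein ('*' marks stop codons)."""
    dna = dna.upper().replace("U", "T")   # accept RNA input too
    return "".join(CODON_TABLE[dna[i:i + 3]]
                   for i in range(0, len(dna) - len(dna) % 3, 3))

print(translate("ATGGCCTAA"))  # -> MA*
```

Calling this function directly costs nanoseconds per codon; wrapping the same operation behind RSOAP- or Rserve-style RPC adds a per-call network and marshalling cost, which is why the chapter finds call-stack bridging faster than RPC for fine-grained operations.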
Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W
2016-11-01
Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A critical challenge for this class of experiments is image segmentation: determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase-contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, generalizes to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell type systems.
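The core operation behind such networks, a 2-D convolution followed by a pixelwise decision, can be illustrated without any deep-learning framework. The 3x3 kernel, toy "image", and threshold below are stand-ins for exposition, not trained weights from the paper's networks.

```python
def conv2d(img, kernel):
    """Valid-mode 2-D convolution (cross-correlation) on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(w - kw + 1)]
            for r in range(h - kh + 1)]

def segment(img, kernel, threshold):
    """Pixelwise 'cell vs. background' decision on the filter response."""
    return [[1 if v >= threshold else 0 for v in row]
            for row in conv2d(img, kernel)]

# Toy fluorescence image: a bright 2x2 "nucleus" on a dark background
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # 3x3 summing filter
print(conv2d(img, box))       # every 3x3 window covers the whole nucleus
print(segment(img, box, 3))   # all interior positions classified as cell
```

A real segmentation network stacks many such learned filters with nonlinearities in between and learns the weights from curated annotations; this sketch only shows why a convolutional response map maps naturally onto a per-pixel cell/background labeling.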
Woodman, N.; Morgan, J.P.J.
2005-01-01
Variation in the forefoot skeleton of small-eared shrews (family Soricidae, genus Cryptotis) has been previously documented, but the paucity of available skeletons for most taxa makes assessment of the degrees of intraspecific and interspecific variation difficult. We used a digital X-ray system to extract images of the forefoot skeleton from 101 dried skins of eight taxa (seven species, including two subspecies of one species) of these shrews. Lengths and widths of each of the four bones of digit III were measured directly from the digital images, and we used these data to quantify variation within and among taxa. Analysis of the images and measurements showed that interspecific variation exceeds intraspecific variation. In fact, most taxa could be distinguished in multivariate and some bivariate plots. Our quantitative data helped us define a number of specific forefoot characters that we subsequently used to hypothesize evolutionary relationships among the taxa using the exhaustive search option in PAUP, a computer program for phylogenetic analysis. The resulting trees generally concur with previously published evolutionary hypotheses for small-eared shrews. Cryptotis meridensis, a taxon not previously examined in recent phylogenies, is rooted at the base of the branch leading to the C. mexicana group of species. The position of this species suggests that the mostly South American C. thomasi group shares an early ancestor with the C. mexicana group.
Jupiter Observation Campaign - Citizen Science At The Outer Planets: A Progress Report
NASA Astrophysics Data System (ADS)
Houston Jones, J.; Dyches, P.
2012-12-01
Amateur astronomers and astrophotographers diligently image Mars, Saturn and Jupiter in amazing detail. They often capture first views of storms on Saturn, impacts on Jupiter, and changes in the planets' atmospheres. Many of the worldwide cadre of imagers share their images with each other and with planetary scientists. This new Jupiter-focused citizen-science program seeks to collect images and sort them into categories useful to scientists. In doing so, it provides a larger population of amateur astronomers with the opportunity to contribute their observations to NASA's Juno mission.
ERIC Educational Resources Information Center
Tao, Ping-Kee
2004-01-01
This article reports the use of a computer-based collaborative learning instruction designed to help students develop understanding of image formation by lenses. The study aims to investigate how students, working in dyads and mediated by multimedia computer-assisted learning (CAL) programs, construct shared knowledge and understanding. The…
The Investigation on Brand Image of University Education and Students' Word-of-Mouth Behavior
ERIC Educational Resources Information Center
Chen, Chin-Tsu
2016-01-01
This study aimed to find how the brand image and satisfaction of universities influence university students' word-of-mouth behavior, including the sharing of satisfying experiences and recommendations to others. This study conducted a questionnaire survey and distributed 400 questionnaires to students and graduates of universities in Taiwan; 336…
Cyberinfrastructure for Open Science at the Montreal Neurological Institute
Das, Samir; Glatard, Tristan; Rogers, Christine; Saigle, John; Paiva, Santiago; MacIntyre, Leigh; Safi-Harab, Mouna; Rousseau, Marc-Etienne; Stirling, Jordan; Khalili-Mahani, Najmeh; MacFarlane, David; Kostopoulos, Penelope; Rioux, Pierre; Madjar, Cecile; Lecours-Boucher, Xavier; Vanamala, Sandeep; Adalat, Reza; Mohaddes, Zia; Fonov, Vladimir S.; Milot, Sylvain; Leppert, Ilana; Degroot, Clotilde; Durcan, Thomas M.; Campbell, Tara; Moreau, Jeremy; Dagher, Alain; Collins, D. Louis; Karamchandani, Jason; Bar-Or, Amit; Fon, Edward A.; Hoge, Rick; Baillet, Sylvain; Rouleau, Guy; Evans, Alan C.
2017-01-01
Data sharing is becoming more of a requirement as technologies mature and as global research and communications diversify. As a result, researchers are looking for practical solutions, not only to enhance scientific collaborations, but also to acquire larger amounts of data, and to access specialized datasets. In many cases, the realities of data acquisition present a significant burden, therefore gaining access to public datasets allows for more robust analyses and broadly enriched data exploration. To answer this demand, the Montreal Neurological Institute has announced its commitment to Open Science, harnessing the power of making both clinical and research data available to the world (Owens, 2016a,b). As such, the LORIS and CBRAIN (Das et al., 2016) platforms have been tasked with the technical challenges specific to the institutional-level implementation of open data sharing, including: comprehensive linking of multimodal data (phenotypic, clinical, neuroimaging, biobanking, genomics, etc.); secure database encryption, specifically designed for institutional and multi-project data sharing, ensuring subject confidentiality (using multi-tiered identifiers); querying capabilities with multiple levels of single-study and institutional permissions, allowing public data sharing for all consented and de-identified subject data; configurable pipelines and flags to facilitate acquisition and analysis, as well as access to High Performance Computing clusters for rapid data processing and sharing of software tools; robust workflows and quality control mechanisms ensuring transparency and consistency in best practices; long-term storage (and web access) of data, reducing loss of institutional data assets; enhanced web-based visualization of imaging, genomic, and phenotypic data, allowing real-time viewing and manipulation of data from anywhere in the world; and numerous modules for data filtering, summary statistics, and personalized and configurable dashboards.
Implementing the vision of Open Science at the Montreal Neurological Institute will be a concerted undertaking that seeks to facilitate data sharing for the global research community. Our goal is to utilize the years of experience in multi-site collaborative research infrastructure to implement the technical requirements to achieve this level of public data sharing in a practical yet robust manner, in support of accelerating scientific discovery. PMID:28111547
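The multi-tiered identifier scheme mentioned above can be sketched with keyed hashing: each tier re-maps the previous identifier under a key held by a different party, so no single key links a released ID back to a subject. The two-tier layout, key names, and truncation length below are illustrative assumptions, not LORIS's actual implementation.

```python
import hashlib
import hmac

def pseudonym(key: bytes, identifier: str) -> str:
    """Deterministic keyed pseudonym: reproducible with the key,
    infeasible to invert or recompute without it."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Tier 1: the acquiring site maps the real ID to a site-local pseudonym.
# Tier 2: the institution re-maps that pseudonym for public release.
site_key = b"site-secret-key"            # illustrative keys only
central_key = b"institution-secret-key"

subject_id = "MRN-0042"
tier1 = pseudonym(site_key, subject_id)
tier2 = pseudonym(central_key, tier1)
print(tier2 != tier1 and subject_id not in tier2)  # True
```

Because each mapping is deterministic per key, the same subject always receives the same released identifier (so longitudinal data stay linked), yet re-identification requires compromising both tiers' keys.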
Kuchinke, Wolfgang; Krauth, Christian; Bergmann, René; Karakoyun, Töresin; Woollard, Astrid; Schluender, Irene; Braasch, Benjamin; Eckert, Martin; Ohmann, Christian
2016-07-07
Data in the life sciences are generated and stored in many different databases at an unprecedented rate. An ever-increasing share of these data are human health data and therefore fall under legal data-protection regulations. As part of the BioMedBridges project, which created infrastructures connecting more than 10 ESFRI research infrastructures (RI), the legal and ethical prerequisites of data sharing were examined using a novel and pragmatic approach. We employed concepts from computer science to create legal requirement clusters that enable legal interoperability between databases in the areas of data protection, data security, Intellectual Property (IP), and security of biosample data. We analysed and extracted access rules and constraints from all data providers (databases) involved in the building of data bridges, covering many of Europe's most important databases. These requirement clusters were applied to five usage scenarios representing the data flow in different data bridges: Image bridge, Phenotype data bridge, Personalised medicine data bridge, Structural data bridge, and Biosample data bridge. A matrix was built to relate the important concepts from data protection regulations (e.g. pseudonymisation, identifiability, access control, consent management) to the results of the requirement clusters, and an interactive user interface for querying the matrix for requirements necessary for compliant data sharing was created. To guide researchers through legal requirements without the need for legal expert knowledge, an interactive tool, the Legal Assessment Tool (LAT), was developed. LAT guides researchers through an interactive selection process to characterise the types of data and databases involved, and provides suitable requirements and recommendations for concrete data access and sharing situations. The results provided by LAT are based on an analysis of the data access and sharing conditions for different kinds of data in major databases in Europe.
Human health data must be opened up for research data sharing, and LAT is one means to achieve this aim. In summary, LAT interactively provides requirements for compliant data access and sharing with appropriate safeguards, restrictions, and responsibilities, introducing a culture of responsibility and data governance when dealing with human data.
Methodology for fast detection of false sharing in threaded scientific codes
Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang
2014-11-25
A profiling tool identifies a code region with false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with those variables and arrays while a processor is running the region, and identifies one or more at-risk instructions that are subject to analysis by a false sharing detection library. The false sharing detection library performs a run-time analysis of the at-risk instructions while the processor re-runs the identified code region and determines, based on that analysis, whether two different portions of the same cache line are accessed by the generated binary code.
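The detection pipeline described above ultimately reduces to a simple question: do variables updated by different threads land on the same cache line? A toy sketch of that core check follows; it is not the patented tool itself, and the variable names, byte offsets, and the 64-byte line size are illustrative assumptions.

```python
# Toy sketch of the line-sharing check behind false-sharing detection:
# flag pairs of independently-updated variables whose byte ranges overlap
# the same cache line. Offsets and the line size are illustrative.

LINE_SIZE = 64  # bytes per cache line (typical for x86)

def lines_touched(offset, size, line_size=LINE_SIZE):
    """Set of cache-line indices the byte range [offset, offset+size) spans."""
    return set(range(offset // line_size, (offset + size - 1) // line_size + 1))

def false_sharing_candidates(variables, line_size=LINE_SIZE):
    """variables: list of (name, byte_offset, byte_size), each written by a
    different thread. Returns pairs that share at least one cache line."""
    at_risk = []
    for i in range(len(variables)):
        for j in range(i + 1, len(variables)):
            ni, oi, si = variables[i]
            nj, oj, sj = variables[j]
            if lines_touched(oi, si, line_size) & lines_touched(oj, sj, line_size):
                at_risk.append((ni, nj))
    return at_risk

# Two per-thread counters packed 8 bytes apart share line 0; padding the
# second one out to the next line boundary removes the risk.
packed = [("counter_a", 0, 8), ("counter_b", 8, 8)]
padded = [("counter_a", 0, 8), ("counter_b", 64, 8)]
print(false_sharing_candidates(packed))  # [('counter_a', 'counter_b')]
print(false_sharing_candidates(padded))  # []
```

The run-time half of the method then only has to watch the instructions touching the flagged pairs, rather than every memory access in the region.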
Schoeffield, Andrew J.; Falkler, William A.; Desai, Darshana; Williams, Henry N.
1991-01-01
Little has been reported on the serological relationship of halophilic bdellovibrios (Bd). Immunodiffusion analysis performed with rabbit or mouse Bd antisera developed against eight halophilic Bd isolates and one terrestrial Bd isolate, when reacted with soluble antigen preparations of 45 isolates of halophilic Bd, allowed separation into seven serogroups, which were distinct from the terrestrial isolate. Soluble antigen preparations of prey bacteria, Vibrio parahaemolyticus P-5 (P-5) and Escherichia coli ML 35 (ML 35), exhibited no reactivity with the antisera by immunodiffusion. Immunoelectrophoresis revealed the presence of three distinct antigens in homologous reactions and one shared antigen in heterologous Bd reactions. Shared antigens were noted between halophilic and terrestrial Bd, in addition to between halophilic Bd strains, indicating the possible existence of an antigen(s) which may be shared among all Bd. Again, no shared antigen was noted when P-5 or ML 35 was allowed by immunoelectrophoresis to react with the antisera. Prey susceptibility testing of the seven distinct groups of halophilic Bd, using 20 test prey, produced essentially identical spectra for each group, indicating that this was not a useful technique in delineating the Bd. While immunoelectrophoresis was able to demonstrate an antigen common to all Bd tested, immunodiffusion was able to delineate strains on the basis of a “serogroup specific” antigen. This suggests that immunological tools may serve as important means to study the taxonomy of halophilic Bd, as well as in the formation of a clearer taxonomic picture of the genus Bdellovibrio. PMID:16348597
Using a high-definition stereoscopic video system to teach microscopic surgery
NASA Astrophysics Data System (ADS)
Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin
2007-02-01
Introduction: While there is an increasing demand for minimally invasive operative techniques in Ear, Nose and Throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring into microscopic surgery in order to facilitate teaching interaction between senior and junior surgeons. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVision Systems, Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation with dual Intel® Xeon® CPUs (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen using polarized filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse, and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup performed only once on the day before surgery, fine adjustments required about 10 minutes extra during the operating schedule, which fitted into the interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments needed. D) Medical students instantly shared the information given by all staff and the image, avoiding the need for an extra teaching session.
Conclusion: High-definition stereoscopy has the potential to compress the learning curve for undergraduate as well as postgraduate medical professionals in minimally invasive surgery. Further studies will focus on the long-term effect on operative training as well as on post-processing of HD stereoscopic video content for off-line interactive medical education.
Imaging electron wave functions inside open quantum rings.
Martins, F; Hackens, B; Pala, M G; Ouisse, T; Sellier, H; Wallart, X; Bollaert, S; Cappy, A; Chevrier, J; Bayot, V; Huant, S
2007-09-28
Combining scanning gate microscopy (SGM) experiments and simulations, we demonstrate low-temperature imaging of the electron probability density |Ψ|²(x,y) in embedded mesoscopic quantum rings. The tip-induced conductance modulations share the same temperature dependence as the Aharonov-Bohm effect, indicating that they originate from electron wave function interferences. Simulations of both |Ψ|²(x,y) and SGM conductance maps reproduce the main experimental observations and link fringes in SGM images to |Ψ|²(x,y).
Xia, Wei; Wu, Jian; Deng, Fei-Yan; Wu, Long-Fei; Zhang, Yong-Hong; Guo, Yu-Fan; Lei, Shu-Feng
2017-02-01
Rheumatoid arthritis (RA) is a systemic autoimmune disease. So far, it is unclear whether there exist common RA-related genes shared across different tissues/cells. In this study, we conducted an integrative analysis of multiple datasets to identify potential shared genes that are significant in multiple tissues/cells for RA. Seven microarray gene expression datasets representing various RA-related tissues/cells were downloaded from the Gene Expression Omnibus (GEO). Statistical analyses, testing both marginal and joint effects, were conducted to identify significant genes shared across samples. Follow-up analyses included functional annotation clustering, protein-protein interaction (PPI) analysis, gene-based association analysis, and ELISA validation in in-house samples. We identified 18 shared significant genes, which were mainly involved in the immune response and chemokine signaling pathway. Among the 18 genes, eight (PPBP, PF4, HLA-F, S100A8, RNASEH2A, P2RY6, JAG2, and PCBP1) interact with known RA genes. Two genes (HLA-F and PCBP1) are significant in gene-based association analysis (P = 1.03E-31 and P = 1.30E-2, respectively). Additionally, PCBP1 also showed differential protein expression levels in in-house case-control plasma samples (P = 2.60E-2). This study represents the first effort to identify shared RA markers from different functional cells or tissues. The results suggest that one of the shared genes, PCBP1, is a promising biomarker for RA.
Data and Information Exchange System for the "Reindeer Mapper" Project
NASA Technical Reports Server (NTRS)
Maynard, Nancy; Yurchak, Boris
2005-01-01
During this past year, the Reindeer Mapper Intranet system has been set up on the NASA system, 8 team members have been established, a Reindeer Mapper reference list containing 696 items has been entered, 6 PowerPoint presentations have been put online for review among team members, 304 satellite images have been catalogued (including 16 Landsat images, 288 NDVI 10-day composited images plus an anomaly series from May 1998 to December 2002, and 56 SAR files in CEOS format), schedules and meeting dates are being shared, students at the Nordic Sami Institute are experimenting with the system for sharing reindeer herders' indigenous knowledge, and an "address book" is being developed. Several documents and presentations have been translated and made available in Russian for our Russian colleagues. This has enabled our Russian partners to utilize documents and presentations in their research (e.g., SAR imagery comparisons with Russian GIS of specific study areas) and in discussion with local colleagues.
Digital dental radiology in Belgium: a nationwide survey.
Snel, Robin; Van De Maele, Ellen; Politis, Constantinus; Jacobs, Reinhilde
2018-06-27
The aim of this study was to analyse the use of digital dental radiology in Belgium, focussing on extraoral and intraoral radiographic techniques, digitalisation, and image communication. A nationwide survey was performed amongst Belgian general dentists and dental specialists. Questionnaires were distributed digitally via mailing lists and manually at multiple refresher courses and congresses throughout the country. The overall response rate was 30%. Overall, 94% of the respondents had access to an intraoral radiographic unit, 76% had access to a panoramic unit, and 21% had an attached cephalometric arm. One in five Belgian dentists also appeared to have direct access to a cone beam CT. 90% of all intraoral radiography units worked with digital detectors, as did 91% of panoramic units (with or without cephalometrics). Of the general dental practitioners with a digital intraoral unit, 70% used storage phosphor plates while 30% used sensor technology (charge-coupled device or complementary metal-oxide-semiconductor). The most common method for professional image transfer appeared to be email. Finally, 16% of all respondents used a calibrated monitor for image analysis. The survey indicates that 90% of the responding Belgian dentists use digital imaging techniques. For sharing images, general dental practitioners mainly use methods such as printouts and e-mail. The use of calibrated monitors, however, is not yet well established.
Imaging manifestations of autoimmune disease-associated lymphoproliferative disorders of the lung.
Lee, Geewon; Lee, Ho Yun; Lee, Kyung Soo; Lee, Kyung Jong; Cha, Hoon-Suk; Han, Joungho; Chung, Man Pyo
2013-10-01
Lymphoproliferative disorders (LPDs) may involve intrathoracic organs in patients with autoimmune disease, but little is known about the radiologic manifestations of autoimmune disease-associated LPDs (ALPDs) of the lungs. The purpose of our work was to identify the radiologic characteristics of pulmonary involvement in ALPDs. A comprehensive search of the PubMed database was conducted using combinations of MeSH terms. All articles containing original images or descriptions of radiologic findings were included in this analysis. CT images of eight patients with biopsy-proven lymphoproliferative disorder from our institution were also added. Overall, 44 cases of ALPD were identified, consisting of 24 cases of bronchus-associated lymphoid tissue lymphoma (BALToma), eight cases of non-Hodgkin's lymphoma (NHL), six cases of lymphoid interstitial pneumonia (LIP), two cases of nodular lymphoid hyperplasia, two cases of unclassified lymphoproliferative disorder, and one case each of lymphomatoid granulomatosis and hyperplastic BALT. Multiple nodules (n = 14, 32%) and a single mass (n = 8, 18%) were the predominant radiologic manifestations. The imaging findings conformed to previously described findings of BALToma, NHL, or LIP. The data suggest that BALToma, NHL, and LIP are the predominant ALPDs of the lung, and that ALPDs generally share common radiologic features with sporadic LPDs. Familiarity with ALPDs and their imaging findings may enable radiologists and clinicians to include the disease as a potential differential diagnosis and thus prompt early biopsy followed by appropriate treatment.
Smartphone adapters for digital photomicrography.
Roy, Somak; Pantanowitz, Liron; Amin, Milon; Seethala, Raja R; Ishtiaque, Ahmed; Yousem, Samuel A; Parwani, Anil V; Cucoranu, Ioan; Hartman, Douglas J
2014-01-01
Photomicrographs in Anatomic Pathology provide a means of quickly sharing information from a glass slide for consultation, education, documentation and publication. While static image acquisition historically involved the use of a permanently mounted camera unit on a microscope, such cameras may be expensive, need to be connected to a computer, and often require proprietary software to acquire and process images. Another novel approach for capturing digital microscopic images is to use smartphones coupled with the eyepiece of a microscope. Recently, several smartphone adapters have emerged that allow users to attach mobile phones to the microscope. The aim of this study was to test the utility of these various smartphone adapters. We surveyed the market for adapters to attach smartphones to the ocular lens of a conventional light microscope. Three adapters (Magnifi, Skylight and Snapzoom) were tested. We assessed the designs of these adapters and their effectiveness at acquiring static microscopic digital images. All adapters facilitated the acquisition of digital microscopic images with a smartphone. The optimal adapter was dependent on the type of phone used. The Magnifi adapters for iPhone were incompatible when using a protective case. The Snapzoom adapter was easiest to use with iPhones and other smartphones even with protective cases. Smartphone adapters are inexpensive and easy to use for acquiring digital microscopic images. However, they require some adjustment by the user in order to optimize focus and obtain good quality images. Smartphone microscope adapters provide an economically feasible method of acquiring and sharing digital pathology photomicrographs.
MO-B-BRC-01: Introduction [Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prisciandaro, J.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
Peer-to-peer architecture for multi-departmental distributed PACS
NASA Astrophysics Data System (ADS)
Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman
2006-03-01
We have elected to explore peer-to-peer technology as an alternative to a centralized PACS architecture, given the increasing requirement for wide access to images inside and outside a radiology department. The goal is to allow users across the enterprise to access any study at any time without the need for prefetching or routing of images from a central archive; images can be accessed between different workstations and local storage nodes. We implemented Bonjour, a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and transfer. Our open-source image display platform, OsiriX, was adapted to share local DICOM images through direct access to each local SQL database, making them accessible from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database also allows distributed archive servers to be accessed in the same way. The infrastructure implemented allows fast and efficient access to any image, anywhere, at any time, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers that provide efficient caching of PACS data, which was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments, where clinical conferences, interdisciplinary discussions, and successive sessions of image processing are often part of complex workflows for patient management and decision making.
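The resolution step in such a peer-to-peer design, finding which node can serve a study instead of always asking the central archive, can be sketched as a toy in-memory directory. This is an illustration only, not the Bonjour API or the OsiriX database schema; node names and study UIDs are invented.

```python
# Toy sketch of peer-to-peer study resolution: each workstation advertises
# the studies in its local DICOM database, and any peer asks the directory
# which nodes hold a copy. Node names and UIDs are illustrative.

class PeerDirectory:
    def __init__(self):
        self._index = {}  # study_uid -> set of node names holding a copy

    def advertise(self, node, study_uids):
        """A node announces the studies available in its local storage."""
        for uid in study_uids:
            self._index.setdefault(uid, set()).add(node)

    def locate(self, study_uid):
        """Return the set of nodes that can serve this study (may be empty)."""
        return self._index.get(study_uid, set())

directory = PeerDirectory()
directory.advertise("workstation-radiology-1", ["1.2.840.1"])
directory.advertise("cache-server", ["1.2.840.1", "1.2.840.2"])

# A reading workstation queries the directory rather than the central archive:
print(sorted(directory.locate("1.2.840.1")))  # ['cache-server', 'workstation-radiology-1']
print(directory.locate("1.2.999"))            # set()
```

In the real system this directory role is played by zero-configuration service discovery, and the caller then fetches the images directly from the fastest advertised peer.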
MO-B-BRC-04: MRI-Based Prostate HDR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mourtada, F.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
MO-B-BRC-02: Ultrasound Based Prostate HDR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Z.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
ISMRM Raw data format: A proposed standard for MRI raw datasets.
Inati, Souheil J; Naegele, Joseph D; Zwart, Nicholas R; Roopchansingh, Vinai; Lizak, Martin J; Hansen, David C; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E; Sørensen, Thomas S; Hansen, Michael S
2017-01-01
This work proposes the ISMRM Raw Data format as a common MR raw data format, which promotes algorithm and data sharing. A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using magnetic resonance imaging scanners from four vendors, converted to ISMRM Raw Data format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. The proposed raw data format solves a practical problem for the magnetic resonance imaging community. It may serve as a foundation for reproducible research and collaborations. The ISMRM Raw Data format is a completely open and community-driven format, and the scientific community is invited (including commercial vendors) to participate either as users or developers. Magn Reson Med 77:411-421, 2017. © 2016 Wiley Periodicals, Inc.
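The format's two core ingredients, a flexible header plus tagged frames of k-space data, can be sketched as a small mock container followed by a trivial reconstruction. This is an illustrative mock-up of the structure the abstract describes, not the real ISMRMRD API; the field and tag names are assumptions.

```python
# Minimal sketch of a "flexible header + tagged k-space frames" container
# and a trivial FFT reconstruction. Not the ISMRMRD API; names are invented.
import numpy as np

class RawDataset:
    def __init__(self, header):
        self.header = header  # free-form metadata (XML in the real format)
        self.frames = []      # list of (tags, kspace_line) pairs

    def append(self, tags, kspace_line):
        self.frames.append((dict(tags), np.asarray(kspace_line, dtype=complex)))

    def kspace_matrix(self):
        """Stack acquired lines into a full k-space matrix ordered by the
        'line' tag, as a reconstruction program would."""
        ordered = sorted(self.frames, key=lambda f: f[0]["line"])
        return np.stack([line for _, line in ordered])

# Simulate acquiring the 8 phase-encode lines of an 8x8 object.
obj = np.zeros((8, 8))
obj[3:5, 3:5] = 1.0
full_k = np.fft.fft2(obj)

ds = RawDataset({"matrix": [8, 8], "vendor": "converted"})
for i in range(8):
    ds.append({"line": i, "average": 0}, full_k[i])

# Any reader that understands the tags can reconstruct, vendor-independently.
image = np.abs(np.fft.ifft2(ds.kspace_matrix()))
print(image.shape)                         # (8, 8)
print(np.allclose(image, obj, atol=1e-9))  # True
```

The point of the tagged-frame design is exactly this decoupling: acquisition order and vendor quirks live in the tags, so one reconstruction program serves data from any converter.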
Kashyap, Ravi; Dondi, Maurizio; Paez, Diana; Mariani, Guliano
2013-05-01
The growth in nuclear medicine over the past decade is largely due to hybrid imaging, specifically single-photon emission computed tomography-computed tomography (SPECT-CT) and positron emission tomography-computed tomography (PET-CT). The introduction and use of hybrid imaging has been growing at a fast pace, presenting many challenges and opportunities for the personnel dealing with it. The International Atomic Energy Agency (IAEA) keeps a close watch on trends in applications of nuclear techniques in health in many ways, including obtaining input from member states and professional societies. In 2012, a Technical Meeting on trends in hybrid imaging was organized by the IAEA to understand the current status and trends of hybrid imaging using nuclear techniques, its role in clinical practice, and associated educational needs and challenges. Perspectives of scientific societies and professionals from all regions of the world were obtained. Heterogeneity in value, educational needs, and access was noted, and the drivers of this heterogeneity were discussed. This article presents the key points shared during the technical meeting, focusing primarily on SPECT-CT and PET-CT, and shares the action plan suggested by the participants for the IAEA to address this heterogeneity. Copyright © 2013 Elsevier Inc. All rights reserved.
Development of an LYSO based gamma camera for positron and scinti-mammography
NASA Astrophysics Data System (ADS)
Liang, H.-C.; Jan, M.-L.; Lin, W.-C.; Yu, S.-F.; Su, J.-L.; Shen, L.-H.
2009-08-01
In this research, the characteristics of combining PSPMTs (position-sensitive photomultiplier tubes) to form a larger detection area are studied. A home-made linear divider circuit was built for merging signals and readout. Borosilicate glasses were chosen for scintillation light sharing in the crossover region. The deterioration caused by the light guide was characterized, and the influences of the light guide and crossover region on the separable crystal size were evaluated. Based on the test results, a gamma camera with a crystal block covering 90 × 90 mm², composed of 2 mm LYSO crystal pixels, was designed and fabricated. Measured performance showed that this camera worked well with both 511 keV and lower-energy gammas. The light-loss behaviour within the crossover region was analyzed. Count rate measurements showed that the intrinsic ¹⁷⁶Lu background did not severely affect single-photon imaging and accounted for less than one third of all acquired events. These results show that, using light-sharing techniques, multiple PSPMTs can be combined in both the X and Y directions to build a large-area imaging detector, and that the camera design retains capability for both positron and single-photon breast imaging applications. The separable crystal size is 2 mm, with a 2 mm thick glass light guide applied for light sharing in the current configuration.
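A linear divider network of the kind described classically computes an Anger-style position estimate: the event position is the signal-weighted centroid of the anode positions, normalized by the summed charge (which also gives the event energy). The sketch below illustrates that standard calculation; the anode geometry and signal values are invented for illustration and are not taken from the paper.

```python
# Illustrative Anger-logic position estimate, the computation a linear
# divider circuit performs in hardware: position = charge-weighted centroid,
# energy = summed charge. Anode positions and signals are made up.

def anger_position(signals, positions):
    """signals: charges from each anode; positions: their (x, y) centres.
    Returns (x, y, total_charge)."""
    total = sum(signals)
    x = sum(s * px for s, (px, py) in zip(signals, positions)) / total
    y = sum(s * py for s, (px, py) in zip(signals, positions)) / total
    return x, y, total

# Four corner anodes at +/-1; scintillation light shared unevenly toward
# the (+x, +y) corner pulls the centroid that way.
positions = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
signals = [10.0, 20.0, 20.0, 50.0]

x, y, energy = anger_position(signals, positions)
print(round(x, 2), round(y, 2), energy)  # 0.4 0.4 100.0
```

Events whose summed charge falls outside the photopeak window (e.g. the ¹⁷⁶Lu intrinsic background at other energies) can then be rejected on the `energy` value before position histogramming.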
IHE cross-enterprise document sharing for imaging: interoperability testing software
2010-01-01
Background: With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results: In this paper we describe software used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions: EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties. PMID:20858241
IHE cross-enterprise document sharing for imaging: interoperability testing software.
Noumeir, Rita; Renaud, Bérubé
2010-09-21
With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe software used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties.
Seasonal and interannual variations of atmospheric CO2 and climate
NASA Astrophysics Data System (ADS)
Dettinger, Michael D.; Ghil, Michael
1998-02-01
Interannual variations of atmospheric CO2 concentrations at Mauna Loa are almost masked by the seasonal cycle and a strong trend; at the South Pole, the seasonal cycle is small and is almost lost in the trend and interannual variations. Singular-spectrum analysis (SSA) is used here to isolate and reconstruct interannual signals at both sites and to visualize recent decadal changes in the amplitude and phase of the seasonal cycle. Analysis of the Mauna Loa CO2 series illustrates a hastening of the CO2 seasonal cycle, a close temporal relation between Northern Hemisphere (NH) mean temperature trends and the amplitude of the seasonal CO2 cycle, and tentative ties between the latter and seasonality changes in temperature over the NH continents. Variations of the seasonal CO2 cycle at the South Pole differ from those at Mauna Loa: it is phase changes of the seasonal cycle at the South Pole, rather than amplitude changes, that parallel hemispheric and global temperature trends. The seasonal CO2 cycles exhibit earlier occurrences of the seasons by 7 days at Mauna Loa and 18 days at the South Pole. Interannual CO2 variations are shared at the two locations, appear to respond to tropical processes, and can be decomposed mostly into two periodicities, with frequencies of around (3 years)^-1 and (4 years)^-1, respectively. Joint SSA analyses of CO2 concentrations and tropical climate indices isolate a shared mode with a quasi-triennial (QT) period in which the CO2 and sea-surface temperature (SST) participation are in phase opposition. The other shared mode has a quasi-quadrennial (QQ) period and CO2 variations are in phase with the corresponding tropical SST variations throughout the tropics. Together these interannual modes exhibit a mean lag between tropical SSTs and CO2 variations of about 6-8 months, with SST leading. Analysis of the QT and QQ signals in global gridded SSTs, joint SSA of CO2 and δ13C isotopic ratios, and SSA of CO2 and NH-land temperatures indicate that the QT variations in CO2 mostly reflect upwelling variations in the eastern tropical Pacific. QQ variations are dominated by the CO2 signature of terrestrial-ecosystem response to global QQ climate variations. Climate variations associated with these two interannual components of tropical variability have very different effects on global climate and, especially, on terrestrial ecosystems and the carbon cycle.
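The SSA decomposition used in this record follows a standard recipe: embed the series in a trajectory matrix, take an SVD, and map each rank-1 term back to a time series by averaging over anti-diagonals. A minimal numpy sketch of that generic algorithm follows (an illustration only, not the authors' implementation; the window length L is an arbitrary choice):

```python
import numpy as np

def ssa_components(x, L, r):
    """Decompose series x into its first r SSA components.

    Builds the L x K trajectory matrix, takes its SVD, and converts each
    rank-1 elementary matrix back to a series by anti-diagonal averaging.
    Returns an (r, len(x)) array; summing all components recovers x.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column j holds the window x[j:j+L].
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(r):
        Xk = s[k] * np.outer(U[:, k], Vt[k])      # rank-1 elementary matrix
        # Anti-diagonal i + j = n of Xk collects all estimates of x[n];
        # flipping columns turns anti-diagonals into ordinary diagonals.
        comp = np.array([np.diag(Xk[:, ::-1], j).mean()
                         for j in range(K - 1, -L, -1)])
        comps.append(comp)
    return np.array(comps)
```

With the window length at most half the series length, the sum of all min(L, K) components reconstructs the original series exactly, which makes the decomposition easy to sanity-check.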
Dewidar, K; Thomas, J; Bayoumi, S
2016-07-01
Off-road vehicles can have a devastating impact on vegetation and soil. Here, we sought to quantify, through a combination of field vegetation, bulk soil, and image analyses, the impact of off-road vehicles on the vegetation and soils of Rawdat Al Shams, which is located in central Saudi Arabia. Soil compaction density was measured in the field, and 27 soil samples were collected for bulk density analysis in the lab to quantify the impacts of off-road vehicles. High spatial resolution images, such as those obtained by the satellites GeoEye-1 and IKONOS-2, were used for surveying the damage to vegetation cover and soil compaction caused by these vehicles. Vegetation cover was mapped using the Normalized Difference Vegetation Index (NDVI) technique based on high-resolution images taken at different times of the year. Vehicle trails were derived from satellite data via visual analysis. All damaged areas were determined from high-resolution image data. In this study, we conducted quantitative analyses of vegetation cover change, the impacts of vehicle trails (hereafter "trail impacts"), and a bulk soil analysis. Image data showed that both vegetation cover and trail impacts increased from 2008 to 2015, with the average percentage of trail impacts nearly equal to that of the percentage of vegetation cover during this period. Forty-six species of plants were found to be present in the study area, consisting of all types of life forms, yet trees were represented by a single species, Acacia gerrardii. Herbs composed the largest share of plant life, with 29 species, followed by perennial herbs (12 species), grasses (5 species), and shrubs (3 species). Analysis of soil bulk density for Rawdat Al Shams showed that off-road driving greatly impacts soil density. Twenty-two plant species were observed on the trails, the majority of which were ephemerals. 
Notoceras bicorne was the most common, with a frequency rate of 93.33 %, an abundance value of 78.47 %, and a density of 0.1 in transect 1, followed by Plantago ovata.
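The NDVI technique used to map vegetation cover above has a standard definition, (NIR - Red) / (NIR + Red), computed per pixel from the near-infrared and red bands. A minimal sketch (the band values and the 0.2 vegetation threshold below are illustrative assumptions, not values taken from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    nir, red: per-pixel reflectance in the near-infrared and red bands;
    eps guards against division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Vegetated pixels are then typically flagged with a threshold such as `ndvi(nir, red) > 0.2`; the threshold actually applied in the study is not stated in the abstract.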
Enhancement of the Shared Graphics Workspace.
1987-12-31
participants to share videodisc images and computer graphics displayed in color and text and facsimile information displayed in black on amber. They...could annotate the information in up to five colors and print the annotated version at both sites, using a standard fax machine. The SGWS also used a fax...system to display a document, whether text or photo, the camera scans the document, digitizes the data, and sends it via direct memory access (DMA) to
"I share, therefore I am": personality traits, life satisfaction, and Facebook check-ins.
Wang, Shaojung Sharon
2013-12-01
This study explored whether agreeableness, extraversion, and openness influence self-disclosure behavior, which in turn affects the intensity of checking in on Facebook. A complete path from extraversion to Facebook check-in through self-disclosure and sharing was found. The indirect effect from sharing to check-in intensity through life satisfaction was particularly salient. The central component of check-in is for users to selectively disclose a specific location that has implications for demonstrating their social lives, lifestyles, and tastes, enabling a selective and optimized self-image. Implications for the hyperpersonal model and warranting principle are discussed.
Creating a culture of shared Governance begins with developing the nurse as scholar.
Donohue-Porter, Patricia
2012-01-01
The relationship between shared governance and nursing scholarship is investigated with an emphasis on the connection between stages of scholarly development and nursing action in the evolution of professional practice models. The scholarly image of nursing is described and four critical stages of scholarship (scholarly inquiry, conscious reflection, persistent critique, and intellectual creation) are presented. The development of nursing scholars is described with emphasis on intellectual virtues as described by philosophers and values as described by nursing theorists that are foundational to this process. Shared governance is viewed holistically as a true scholarly process when these elements are in place and are used by nurses.
Yagahara, Ayako; Yokooka, Yuki; Jiang, Guoqian; Tsuji, Shintarou; Fukuda, Akihisa; Nishimoto, Naoki; Kurowarabi, Kunio; Ogasawara, Katsuhiko
2018-03-01
Describing complex mammography examination processes is important for improving the quality of mammograms. It is often difficult for experienced radiologic technologists to explain the process because their techniques depend on their experience and intuition. In our previous study, we analyzed the process using a new bottom-up hierarchical task analysis and identified key components of the process. Leveraging the results of the previous study, the purpose of this study was to construct a mammographic examination process ontology to formally describe the relationships between the process and image evaluation criteria to improve the quality of mammograms. First, we identified and created root classes: task, plan, and clinical image evaluation (CIE). Second, we described an "is-a" relation referring to the result of the previous study and the structure of the CIE. Third, the procedural steps in the ontology were described using the new properties: "isPerformedBefore," "isPerformedAfter," and "isPerformedAfterIfNecessary." Finally, the relationships between tasks and CIEs were described using the "isAffectedBy" property to represent the influence of the process on image quality. In total, there were 219 classes in the ontology. By introducing new properties related to the process flow, a sophisticated mammography examination process could be visualized. In relationships between tasks and CIEs, it became clear that the tasks affecting the evaluation criteria related to positioning were greater in number than those for image quality. We developed a mammographic examination process ontology that makes knowledge explicit for a comprehensive mammography process. Our research will support education and help promote knowledge sharing about mammography examination expertise.
BreakingNews: Article Annotation by Image and Text Processing.
Ramisa, Arnau; Yan, Fei; Moreno-Noguer, Francesc; Mikolajczyk, Krystian
2018-05-01
Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.
A content analysis of thinspiration, fitspiration, and bonespiration imagery on social media.
Talbot, Catherine Victoria; Gavin, Jeffrey; van Steen, Tommy; Morey, Yvette
2017-01-01
On social media, images such as thinspiration, fitspiration, and bonespiration, are shared to inspire certain body ideals. Previous research has demonstrated that exposure to these groups of content is associated with increased body dissatisfaction and decreased self-esteem. It is therefore important that the bodies featured within these groups of content are more fully understood so that effective interventions and preventative measures can be informed, developed, and implemented. A content analysis was conducted on a sample of body-focussed images with the hashtags thinspiration, fitspiration, and bonespiration from three social media platforms. The analyses showed that thinspiration and bonespiration content contained more thin and objectified bodies, compared to fitspiration which featured a greater prevalence of muscles and muscular bodies. In addition, bonespiration content contained more bone protrusions and fewer muscles than thinspiration content. The findings suggest fitspiration may be a less unhealthy type of content; however, a subgroup of imagery was identified which idealised the extremely thin body type and as such this content should also be approached with caution. Future research should utilise qualitative methods to further develop understandings of the body ideals that are constructed within these groups of content and the motivations behind posting this content.
Rinewalt, Daniel; Williams, Betsy W; Reeves, Anthony P; Shah, Palmi; Hong, Edward; Mulshine, James L
2015-03-01
Higher resolution medical imaging platforms are rapidly emerging, but there is a challenge in applying these tools in a clinically meaningful way. The purpose of the current study was to evaluate a novel three-dimensional (3D) software imaging environment, known as interactive science publishing (ISP), in appraising 3D computed tomography images and to compare this approach with traditional planar (2D) imaging in a series of lung cancer cases. Twenty-four physician volunteers at different levels of training across multiple specialties were recruited to evaluate eight lung cancer-related clinical vignettes. The volunteers were asked to compare the performance of traditional 2D versus the ISP 3D imaging in assessing different visualization environments for diagnostic and measurement processes and to further evaluate the ISP tool in terms of general satisfaction, usability, and probable applicability. Volunteers were satisfied with both imaging methods; however, the 3D environment had significantly higher ratings. Measurement performance was comparable using both traditional 2D and 3D image evaluation. Physicians not trained in 2D measurement approaches versus those with such training demonstrated better performance with ISP and preferred working in the ISP environment. Recent postgraduates with only modest self-administered training performed equally well on 3D and 2D cases. This suggests that the 3D environment has no reduction in accuracy over the conventional 2D approach, while providing the advantage of a digital environment for cross-disciplinary interaction for shared problem solving. Exploration of more effective, efficient, self-directed training could potentially result in further improvement in image evaluation proficiency and potentially decrease training costs. Copyright © 2015. Published by Elsevier Inc.
Cost-Benefit Analysis of Implementing a Car-Sharing Model to the Navy’s Passenger Vehicle Fleet
2016-12-01
COST-BENEFIT ANALYSIS OF IMPLEMENTING A CAR-SHARING MODEL TO THE NAVY'S PASSENGER...the public good. This CBA will be conducted using a federal government perspective and standing (whose costs and benefits will be counted) will be...NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA, MBA PROFESSIONAL REPORT
Radiologic image communication and archive service: a secure, scalable, shared approach
NASA Astrophysics Data System (ADS)
Fellingham, Linda L.; Kohli, Jagdish C.
1995-11-01
The Radiologic Image Communication and Archive (RICA) service is designed to provide a shared archive for medical images to the widest possible audience of customers. Images are acquired from a number of different modalities, each available from many different vendors. Images are acquired digitally from those modalities which support direct digital output and by digitizing films for projection x-ray exams. The RICA Central Archive receives standard DICOM 3.0 messages and data streams from the medical imaging devices at customer institutions over the public telecommunication network. RICA represents a completely scalable resource. The user pays only for what he is using today with the full assurance that as the volume of image data that he wishes to send to the archive increases, the capacity will be there to accept it. To provide this seamless scalability imposes several requirements on the RICA architecture: (1) RICA must support the full array of transport services. (2) The Archive Interface must scale cost-effectively to support local networks that range from the very small (one x-ray digitizer in a medical clinic) to the very large and complex (a large hospital with several CTs, MRs, Nuclear medicine devices, ultrasound machines, CRs, and x-ray digitizers). (3) The Archive Server must scale cost-effectively to support rapidly increasing demands for service providing storage for and access to millions of patients and hundreds of millions of images. The architecture must support the incorporation of improved technology as it becomes available to maintain performance and remain cost-effective as demand rises.
An automatic system to detect and extract texts in medical images for de-identification
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael
2010-03-01
Recently, there is an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified, with protected health information (PHI) removed, before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written on algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end-users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that text has a remarkable contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system was implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.
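The region-variance idea in this abstract can be sketched as a block-wise variance threshold: burned-in text contrasts sharply with smoother anatomy, so high-variance blocks are candidate text regions. The block size and threshold below are illustrative assumptions, and the paper's full pipeline additionally applies geometric constraints and a level-set extraction step:

```python
import numpy as np

def text_region_mask(img, block=8, var_thresh=500.0):
    """Mark high-variance blocks as candidate text regions.

    img: 2-D grayscale image. Returns a (h // block, w // block) boolean
    mask; True means the block's intensity variance exceeds the threshold.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for bi in range(h // block):
        for bj in range(w // block):
            patch = img[bi * block:(bi + 1) * block,
                        bj * block:(bj + 1) * block]
            mask[bi, bj] = patch.var() > var_thresh
    return mask
```

On a synthetic image with a high-contrast text-like pattern in one corner and a flat background elsewhere, only the patterned block is flagged.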
Levin-Schwartz, Yuri; Song, Yang; Schreier, Peter J.; Calhoun, Vince D.; Adalı, Tülay
2016-01-01
Due to their data-driven nature, multivariate methods such as canonical correlation analysis (CCA) have proven very useful for fusion of multimodal neurological data. However, being able to determine the degree of similarity between datasets and appropriate order selection are crucial to the success of such techniques. The standard methods for calculating the order of multimodal data focus only on sources with the greatest individual energy and ignore relations across datasets. Additionally, these techniques as well as the most widely-used methods for determining the degree of similarity between datasets assume sufficient sample support and are not effective in the sample-poor regime. In this paper, we propose to jointly estimate the degree of similarity between datasets and their order when few samples are present using principal component analysis and canonical correlation analysis (PCA-CCA). By considering these two problems simultaneously, we are able to minimize the assumptions placed on the data and achieve superior performance in the sample-poor regime compared to traditional techniques. We apply PCA-CCA to the pairwise combinations of functional magnetic resonance imaging (fMRI), structural magnetic resonance imaging (sMRI), and electroencephalogram (EEG) data drawn from patients with schizophrenia and healthy controls while performing an auditory oddball task. The PCA-CCA results indicate that the fMRI and sMRI datasets are the most similar, whereas the sMRI and EEG datasets share the least similarity. We also demonstrate that the degree of similarity obtained by PCA-CCA is highly predictive of the degree of significance found for components generated using CCA. PMID:27039696
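A generic PCA-then-CCA pipeline of the kind described above can be sketched as follows. This is a simplified illustration, not the authors' joint order-selection method: each dataset is reduced to r principal components, and the canonical correlations are then obtained as the singular values (cosines of principal angles) between the orthonormalized reduced datasets:

```python
import numpy as np

def pca_cca(X, Y, r):
    """Reduce X and Y to their top-r principal components, then compute
    the canonical correlations between the two reduced datasets.

    X, Y: (n_samples, n_features) arrays. Returns r correlations in
    [0, 1], largest first.
    """
    def pca_scores(Z, r):
        Z = Z - Z.mean(axis=0)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        return Z @ Vt[:r].T          # projections onto top-r components

    def orthobasis(Z):
        Z = Z - Z.mean(axis=0)
        U, _, _ = np.linalg.svd(Z, full_matrices=False)
        return U                     # orthonormal basis of the column space

    A, B = pca_scores(X, r), pca_scores(Y, r)
    # Canonical correlations = singular values of the product of the
    # orthonormal bases of the two (centered) reduced datasets.
    corr = np.linalg.svd(orthobasis(A).T @ orthobasis(B), compute_uv=False)
    return np.clip(corr, 0.0, 1.0)
```

With two datasets that share one strong latent component, the leading canonical correlation is close to 1 while the remaining correlations stay small.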
NASA Astrophysics Data System (ADS)
Liu, Carol Y. B.; Luk, David C. K.; Zhou, Kany S. Y.; So, Bryan M. K.; Louie, Derek C. H.
2015-03-01
Due to the increasing incidence of malignant melanoma, there is a rising demand for assistive technologies for its early diagnosis and for improving the survival rate. The commonly used visual screening method has limited accuracy, as the early phase of melanoma shares many clinical features with an atypical nevus, while conventional dermoscopes are not user-friendly in terms of setup time and operation. Therefore, the development of an intelligent and handy system to assist the accurate screening and long-term monitoring of melanocytic skin lesions is crucial for early diagnosis and prevention of melanoma. In this paper, an advanced design of a non-invasive and non-radioactive dermoscopy system is reported. Computer-aided simulations were conducted to optimize the optical design and achieve uniform illumination distribution. A functional prototype and the software system were further developed, which enable image capture in 10x magnified and general modes, convenient data transmission, analysis of dermoscopic features (e.g., asymmetry, border irregularity, color, diameter, and dermoscopic structure) to assist the early detection of melanoma, and extraction of patient information (e.g., code, lesion location) integrated with dermoscopic images, thus further supporting long-term monitoring of diagnostic analysis results. A clinical trial was further conducted on 185 Chinese children (0-18 years old). The results showed that, for all subjects, skin conditions diagnosed with the developed system accurately confirmed the diagnoses made by conventional clinical procedures. Clinical analysis of dermoscopic features, and a potential standard approach supported by the developed system for identifying specific melanocytic patterns in dermoscopic examination of Chinese children, were also reported.
Design of Scalable and Effective Earth Science Collaboration Tool
NASA Astrophysics Data System (ADS)
Maskey, M.; Ramachandran, R.; Kuo, K. S.; Lynnes, C.; Niamsuwan, N.; Chidambaram, C.
2014-12-01
Collaborative research is growing rapidly. Many tools, including IDEs, are now beginning to incorporate collaborative features. Software engineering research has shown the effectiveness of collaborative programming and analysis. In particular, drastic reductions in software development time, resulting in reduced cost, have been highlighted. Recently, we have witnessed the rise of applications that allow users to share their content. Most of these applications scale such collaboration using cloud technologies. Earth science research needs to adopt collaboration technologies to reduce redundancy, cut cost, expand the knowledge base, and scale research experiments. To address these needs, we developed the Earth science collaboration workbench (CWB). CWB provides researchers with various collaboration features by augmenting their existing analysis tools to minimize the learning curve. During the development of the CWB, we found that Earth science collaboration tasks are varied, and we concluded that it is not possible to design a tool that serves all collaboration purposes. We adopted a mix of synchronous and asynchronous sharing methods that can be used to collaborate across time and location. We have used cloud technology to scale the collaboration. The cloud has been a highly utilized and valuable tool for Earth science researchers. Among other uses, the cloud is used for sharing research results, Earth science data, and virtual machine images, allowing CWB to create and maintain research environments and networks to enhance collaboration between researchers. Furthermore, the collaborative versioning tool Git is integrated into CWB for versioning of science artifacts. In this paper, we present our experience in designing and implementing the CWB.
We will also discuss the integration of collaborative code development use cases for data search and discovery using NASA DAAC and simulation of satellite observations using NASA Earth Observing System Simulation Suite (NEOS3).
Pang, Shaoning; Ban, Tao; Kadobayashi, Youki; Kasabov, Nikola K
2012-04-01
To adapt linear discriminant analysis (LDA) to real-world applications, there is a pressing need to equip it with an incremental learning ability to integrate knowledge presented by one-pass data streams, a functionality to join multiple LDA models to make knowledge sharing between independent learning agents more efficient, and a forgetting functionality to avoid reconstruction of the overall discriminant eigenspace caused by some irregular changes. To this end, we introduce two adaptive LDA learning methods: LDA merging and LDA splitting. These provide the ability to learn online from one-pass data streams, class separability identical to that of the batch learning method, high efficiency in knowledge sharing due to the condensed knowledge representation of the eigenspace model, and more favorable time and storage costs than traditional approaches under common application conditions. These properties are validated by experiments on a benchmark face image data set. Through a case study on the application of the proposed methods to multiagent cooperative learning and system alternation of a face recognition system, we further clarify the adaptability of the proposed methods to complex dynamic learning tasks.
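Joining independently trained models without revisiting raw data rests on combining per-class sufficient statistics. A minimal sketch of that underlying step, the standard count/mean/scatter merge, is shown below; this is the generic statistic-combination rule, not the authors' full eigenspace-model merging algorithm:

```python
import numpy as np

def merge_stats(n1, mean1, scatter1, n2, mean2, scatter2):
    """Merge (count, mean, scatter matrix) summaries of two disjoint
    sample sets, without revisiting the raw data.

    scatter = sum over samples of (x - mean)(x - mean)^T for that set.
    """
    n = n1 + n2
    mean = (n1 * mean1 + n2 * mean2) / n
    d = (mean1 - mean2).reshape(-1, 1)
    # The cross term accounts for the shift between the two sample means.
    scatter = scatter1 + scatter2 + (n1 * n2 / n) * (d @ d.T)
    return n, mean, scatter
```

The merged statistics match those computed from the pooled raw data exactly, which is what allows two agents to share condensed models instead of samples.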
Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells
Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz
2014-01-01
The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
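Linear SFA, the core of the method above, reduces to whitening the signal and then eigendecomposing the covariance of its temporal derivative, with the slowest features given by the smallest eigenvalues. A minimal sketch of that generic linear case follows (the paper's model additionally uses a nonlinear expansion and simulated retinal-wave stimuli):

```python
import numpy as np

def linear_sfa(X):
    """Linear Slow Feature Analysis.

    X: (T, d) multivariate signal. Returns a (d, d) projection matrix
    whose columns extract unit-variance features, slowest first.
    """
    X = X - X.mean(axis=0)
    # Whiten: rotate to decorrelate, then scale to unit variance.
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals)
    Z = X @ W_white
    # Slowness objective: minimize the variance of the temporal
    # derivative, which in the whitened space is an eigenproblem.
    dZ = np.diff(Z, axis=0)
    devals, devecs = np.linalg.eigh(dZ.T @ dZ / len(dZ))  # ascending
    return W_white @ devecs
```

Applied to a random linear mixture of a slow and a fast sinusoid, the first extracted feature recovers the slow source up to sign.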
CT scan (MedlinePlus Medical Encyclopedia: //medlineplus.gov/ency/article/003330.htm). A computed tomography (CT) scan is an imaging method that uses x- ...