Sample records for imaging system network

  1. Classification of Magneto-Optic Images using Neural Networks

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar; Wincheski, Buzz; Fulton, Jim; Namkung, Min

    1994-01-01

    A real time imaging system with a neural network classifier has been incorporated on a Macintosh computer in conjunction with an MOI system. This system images rivets on aircraft aluminium structures using eddy currents and magnetic imaging. Moment invariant functions from the image of a rivet are used to train a multilayer perceptron neural network to classify the rivets as good or bad (rivets with cracks).
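
    A minimal sketch of the kind of pipeline this abstract describes: Hu moment invariants computed from a rivet image and fed to a multilayer perceptron. The arrays rivet_images and labels are hypothetical placeholders, not the authors' MOI data, and scikit-learn's MLP stands in for whatever network implementation was actually used.

        # Sketch: moment-invariant features + MLP classifier (hypothetical data).
        import cv2
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def hu_features(img):
            # Seven Hu moment invariants, log-scaled for numerical stability.
            m = cv2.HuMoments(cv2.moments(img.astype(np.float32))).flatten()
            return -np.sign(m) * np.log10(np.abs(m) + 1e-30)

        X = np.array([hu_features(img) for img in rivet_images])  # 2-D grayscale arrays
        y = np.array(labels)                                      # 0 = good, 1 = cracked

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
        clf.fit(X, y)
        print(clf.predict(X[:5]))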

  2. Detailed description of the Mayo/IBM PACS

    NASA Astrophysics Data System (ADS)

    Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Salutz, James R.; Morin, Richard L.

    1991-07-01

    The Mayo Clinic and IBM/Rochester have jointly developed a picture archiving and communication system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. The system was developed to replace the imaging system's vendor-supplied magnetic tape archiving capability. The system consists of seven MR imagers and nine CT scanners, each interfaced to the PACS via IBM Personal System/2(tm) (PS/2) computers, which act as gateways from the imaging modality to the PACS network. The PACS operates on the token-ring component of Mayo's city-wide local area network. Also on the PACS network are four optical storage subsystems used for image archival, three optical subsystems used for image retrieval, an IBM Application System/400(tm) (AS/400) computer used for database management, and multiple PS/2-based image display systems and their image servers.

  3. Neural network face recognition using wavelets

    NASA Astrophysics Data System (ADS)

    Karunaratne, Passant V.; Jouny, Ismail I.

    1997-04-01

    The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research studies neural networks and wavelet image processing techniques in the application of human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints of the system concern the characteristics of the images being processed. The system should be able to carry out effective recognition of human faces irrespective of the individual's facial expression, the presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
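
    As a rough illustration of the wavelet front end described here, the sketch below uses PyWavelets to take a Daubechies decomposition of each face and keeps the coarse approximation subband as a compact, denoised feature vector for a back-propagation MLP. The face_images and person_ids variables are hypothetical placeholders.

        # Sketch: Daubechies wavelet features + back-propagation classifier.
        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def wavelet_features(face, wavelet="db4", level=2):
            # Keep only the level-2 approximation subband: a compressed,
            # smoothed representation of the face image.
            coeffs = pywt.wavedec2(face, wavelet, level=level)
            return coeffs[0].ravel()

        X = np.array([wavelet_features(f) for f in face_images])  # 2-D grayscale faces
        y = np.array(person_ids)                                  # one identity per face

        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=3000)
        net.fit(X, y)
        print(net.predict(X[:1]))    # predicted identity for the first face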

  4. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special-purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of the problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with an orientation toward the hospital environment. The three main areas are image data base management, viewing of image data, and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  5. Computer-aided diagnosis workstation and teleradiology network system for chest diagnosis using the web medical image conference system with a new information security solution

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kaneko, Masahiro; Kakinuma, Ryutaro; Moriyama, Noriyuki

    2010-03-01

    Diagnostic MDCT imaging requires a considerable number of images to be read. Moreover, there is a shortage of doctors in Japan who can read these medical images. Against this background, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for the quantitative evaluation of osteoporosis. We have also developed a teleradiology network system that uses a web medical image conference system. In a teleradiology network system, information security is a very important subject. Our teleradiology network system can hold web medical image conferences among medical institutions at remote locations using the web medical image conference system. We completed a basic proof-of-concept experiment of the web medical image conference system with an information security solution. The screen of the web medical image conference system can be shared by two or more web conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation that incorporates the diagnostic assistance methods. Biometric face authentication at the teleradiology site controls file encryption and successful login. The privacy and information security technology of our solution ensures compliance with Japanese regulations, so that patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new teleradiology network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, based on the computer-aided diagnosis workstation and the teleradiology network system, can increase diagnostic speed and accuracy and improve the security of medical information.

  6. Enhancement of digital radiography image quality using a convolutional neural network.

    PubMed

    Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing

    2017-01-01

    Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited in spatial resolution and signal-to-noise ratio. In this study, we explored whether the image quality acquired by a digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. An experiment on a test dataset of 5 X-ray images showed that the proposed method outperformed traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal-to-noise ratio (PSNR) while keeping the processing time within one second. The experimental results demonstrated that a residual-to-residual (RTR) convolutional neural network remarkably improved the image quality of object structural details by increasing image resolution and reducing image noise. Thus, this study indicates that the RTR convolutional neural network is useful for improving the image quality acquired by digital radiography systems.
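
    PSNR is the figure of merit behind the quoted ~1.3 dB gain; below is a small helper showing how it is conventionally computed for 8-bit images (the ground_truth, bicubic_result and cnn_result arrays are placeholders).

        # Sketch: peak signal-to-noise ratio between a reference and a reconstruction.
        import numpy as np

        def psnr(reference, test, peak=255.0):
            mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")       # identical images
            return 10.0 * np.log10(peak ** 2 / mse)

        # Hypothetical usage: compare bicubic upscaling and a CNN output against
        # the same high-resolution ground truth.
        # print(psnr(ground_truth, bicubic_result), psnr(ground_truth, cnn_result))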

  7. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets: MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options for retrieving training image data: (1) PNG-formatted image files on a local file system; (2) pixel arrays from image files packed into a single HDF5 file on a local file system; (3) in-memory arrays holding the pixel arrays in Python and C++; (4) the training data loaded into LevelDB, a log-structured merge-tree based key-value store; and (5) the training data loaded into LMDB, a B+tree based key-value store. The experimental results quantitatively highlight the disadvantage of using plain image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements of training time, this study provides an in-depth analysis of the causes of the performance advantages and disadvantages of each back-end for training deep neural networks. We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
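
    A small sketch contrasting two of the back-ends compared in the study: option (1), decoding individual PNG files, and option (5), reading pre-serialized pixel arrays from LMDB. The paths, keys, and image shape are hypothetical placeholders, and the commented timing loop is only indicative of how such a comparison might be run.

        # Sketch: file-based vs. LMDB-based retrieval of training images.
        import glob
        import time
        import lmdb
        import numpy as np
        from PIL import Image

        def read_png_files(pattern="train/*.png"):
            # Option 1: decode each image file from the local file system.
            return [np.asarray(Image.open(p)) for p in glob.glob(pattern)]

        def read_lmdb(path="train_lmdb", shape=(32, 32, 3)):
            # Option 5: iterate raw pixel buffers stored in a B+tree key-value store.
            images = []
            env = lmdb.open(path, readonly=True, lock=False)
            with env.begin() as txn:
                for _, value in txn.cursor():
                    images.append(np.frombuffer(value, dtype=np.uint8).reshape(shape))
            env.close()
            return images

        # for reader in (read_png_files, read_lmdb):
        #     t0 = time.time(); reader(); print(reader.__name__, time.time() - t0)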

  8. Completion of a Hospital-Wide Comprehensive Image Management and Communication System

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Benson, Harold R.; Horii, Steven C.; Elliott, Larry P.; Lo, Shih-Chung B.; Levine, Betty A.; Braudes, Robert E.; Plumlee, Gabriel S.; Garra, Brian S.; Schellinger, Dieter; Majors, Bruce; Goeringer, Fred; Kerlin, Barbara D.; Cerva, John R.; Ingeholm, Mary-Lou; Gore, Tim

    1989-05-01

    A comprehensive image management and communication (IMAC) network has been installed at Georgetown University Hospital for an extensive clinical evaluation. The network is based on the AT&T CommView system and it includes interfaces to 12 imaging devices, 15 workstations (inside and outside of the radiology department), a teleradiology link to an imaging center, an optical jukebox and a number of advanced image display and processing systems such as Sun workstations, PIXAR, and PIXEL. Details of network configuration and its role in the evaluation project are discussed.

  9. Network-based reading system for lung cancer screening CT

    NASA Astrophysics Data System (ADS)

    Fujino, Yuichi; Fujimura, Kaori; Nomura, Shin-ichiro; Kawashima, Harumi; Tsuchikawa, Megumu; Matsumoto, Toru; Nagao, Kei-ichi; Uruma, Takahiro; Yamamoto, Shinji; Takizawa, Hotaka; Kuroda, Chikazumi; Nakayama, Tomio

    2006-03-01

    This research aims to support chest computed tomography (CT) medical checkups in order to decrease the death rate from lung cancer. We have developed a remote cooperative reading system for lung cancer screening over the Internet, with a secure transmission function and a cooperative reading environment; it is called the Network-based Reading System. A telemedicine system involves many issues, such as network cost and data security, when it is used over the Internet, which is an open network. In Japan, broadband access is widespread and its cost is the lowest in the world. We developed our system with attention to the human-machine interface and security. It consists of data entry terminals, a database server, a computer aided diagnosis (CAD) system, and several reading terminals. It uses a secure Digital Imaging and Communications in Medicine (DICOM) encryption method and Public Key Infrastructure (PKI) based secure DICOM image data distribution. We carried out an experimental trial over the Japan Gigabit Network (JGN), the testbed for the Japanese next-generation network, and conducted verification experiments of secure screening image distribution, several kinds of data addition, and remote cooperative reading. We found that a network bandwidth of about 1.5 Mbps enabled distribution of screening images and cooperative reading, and that the encryption and image distribution methods we proposed were applicable to the encryption and distribution of general DICOM images via the Internet.

  10. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

    This work presents an image pattern recognition system that uses a neural network for the identification of sign language for deaf people. The system stores several images showing the specific symbols of this language, which are used to train a multilayer neural network with a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discriminating performance of the network; this process includes filtering, noise reduction and elimination algorithms, as well as edge detection. The system is evaluated using signs that do not include movement in their representation.

  11. Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6

    NASA Technical Reports Server (NTRS)

    Lee, George

    1993-01-01

    A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.

  12. A neural network approach for image reconstruction in electron magnetic resonance tomography.

    PubMed

    Durairaj, D Christopher; Krishna, Murali C; Murugesan, Ramachandran

    2007-10-01

    An object-oriented, artificial neural network (ANN) based application system for the reconstruction of two-dimensional spatial images in electron magnetic resonance (EMR) tomography is presented. The standard back-propagation algorithm is used to train a three-layer, sigmoidal, feed-forward, supervised ANN to perform the image reconstruction. The network learns the relationship between the 'ideal' images reconstructed using the filtered back projection (FBP) technique and the corresponding projection data (sinograms). The input layer of the network is provided with a training set that contains projection data from various phantoms as well as in vivo objects, acquired from an EMR imager. Twenty-five different network configurations are investigated to test the generalization ability of the network. The trained ANN then reconstructs two-dimensional temporal spatial images that present the distribution of free radicals in biological systems. Image reconstruction by the trained neural network shows better time complexity than conventional iterative reconstruction algorithms such as the multiplicative algebraic reconstruction technique (MART). The network is further explored for image reconstruction from 'noisy' EMR data, and the results show better performance than the FBP method. The network is also tested for its ability to reconstruct from limited-angle EMR data sets.
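
    A toy sketch of the training setup described here: a feed-forward network learns the mapping from projection data (sinograms) to the corresponding FBP reconstructions. It uses scikit-image's radon/iradon on synthetic blob phantoms in place of real EMR projection data, and scikit-learn's MLPRegressor in place of the authors' three-layer network.

        # Sketch: MLP trained to map sinograms to FBP reconstructions (toy phantoms).
        import numpy as np
        from skimage.transform import radon, iradon
        from sklearn.neural_network import MLPRegressor

        theta = np.linspace(0.0, 180.0, 36, endpoint=False)

        def make_pair(rng, size=32):
            # Hypothetical phantom: random Gaussian blobs standing in for
            # free-radical distributions.
            img = np.zeros((size, size))
            yy, xx = np.mgrid[:size, :size]
            for _ in range(3):
                cy, cx = rng.integers(8, size - 8, 2)
                img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 20.0)
            sino = radon(img, theta=theta, circle=False)
            fbp = iradon(sino, theta=theta, filter_name="ramp",
                         circle=False, output_size=size)
            return sino.ravel(), fbp.ravel()

        rng = np.random.default_rng(0)
        pairs = [make_pair(rng) for _ in range(200)]
        X = np.array([p[0] for p in pairs])   # sinograms (network input)
        Y = np.array([p[1] for p in pairs])   # 'ideal' FBP images (network target)

        net = MLPRegressor(hidden_layer_sizes=(256,), max_iter=500)
        net.fit(X, Y)                         # back-propagation training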

  13. GLOBECOM '89 - IEEE Global Telecommunications Conference and Exhibition, Dallas, TX, Nov. 27-30, 1989, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference discusses topics in multiwavelength network technology and its applications, advanced digital radio systems in their propagation environment, mobile radio communications, switching programmability, advancements in computer communications, integrated-network management and security, HDTV and image processing in communications, basic exchange communications radio, advancements in digital switching, intelligent network evolution, speech coding for telecommunications, and multiple access communications. Also discussed are network designs for quality assurance, recent progress in coherent optical systems, digital radio applications, advanced communications technologies for mobile users, communication software for switching systems, AI and expert systems in network management, intelligent multiplexing nodes, video and image coding, network protocols and performance, system methods in quality and reliability, the design and simulation of lightwave systems, local radio networks, mobile satellite communications systems, fiber network restoration, packet video networks, human interfaces for future networks, and lightwave networking.

  14. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. The segmentation task is addressed through semantic segmentation: the FCN classifies individual pixels, thereby achieving pixel-level semantic segmentation. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning in different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4 to 1.6 m, with a distance error of less than 10 mm.
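
    The positioning step hinges on matching features between the segmented left and right views. The paper uses SURF, which in current OpenCV builds requires the non-free contrib module, so the sketch below substitutes ORB from the stock API; the image file names are placeholders.

        # Sketch: binocular feature matching (ORB standing in for SURF).
        import cv2

        left = cv2.imread("left_segmented.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
        right = cv2.imread("right_segmented.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

        orb = cv2.ORB_create(nfeatures=1000)
        kp_l, des_l = orb.detectAndCompute(left, None)
        kp_r, des_r = orb.detectAndCompute(right, None)

        # Brute-force Hamming matcher with cross-check for more reliable pairs.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

        # Pixel disparities of the best matches feed the depth/positioning step.
        for m in matches[:10]:
            dx = kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0]
            print(f"disparity: {dx:.1f} px")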

  15. Initial experience with a radiology imaging network to newborn and intensive care units.

    PubMed

    Witt, R M; Cohen, M D; Appledorn, C R

    1991-02-01

    A digital image network has been installed in the James Whitcomb Riley Hospital for Children at the Indiana University Medical Center to create a limited all-digital imaging system. The system is composed of commercial components, the Philips/AT&T CommView system (Philips Medical Systems, Shelton, CT; AT&T Bell Laboratories, West Long Beach, NJ), and connects an existing Philips Computed Radiology (PCR) system to two remote workstations that reside in the intensive care unit and the newborn nursery. The purpose of the system is to display images obtained from the PCR system on the remote workstations for direct viewing by referring clinicians, and to reduce many of their visits to the radiology reading room three floors away. The design criteria include the ability to centrally control all image management functions on the remote workstations to relieve the clinicians of any image management tasks except recalling patient images. The principal components of the system are the Philips PCR system, the acquisition module (AM), and the PCR interface to the Data Management Module (DMM). Connected to the DMM are an Enhanced Graphics Display Workstation (EGDW), an optical disk drive, and a network gateway to an ethernet link. The ethernet network is the connection to the two Results Viewing Stations (RVS), and both RVSs are approximately 100 m from the gateway. The DMM acts as an image file server and an image archive device. The DMM manages the image data base and can load images to the EGDW and the two RVSs. The system has met the initial design specifications and can successfully capture images from the PCR and direct them to the RVSs. (ABSTRACT TRUNCATED AT 250 WORDS)

  16. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory shows that the technology has high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on an image capturing and processing system for this new type of MEMS-based optical-readout uncooled infrared imaging technology. The image capturing and processing system consists of software and hardware. We build the core image processing hardware platform on TI's high-performance DSP chip, the TMS320DM642, and then design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver based on TI's class/mini-driver model and the network output program based on the NDK kit for image capturing, processing, and transmission. Experiments show that the system has the advantages of high capture resolution and fast processing speed, with a network transmission speed of up to 100 Mbps.

  17. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  18. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System

    PubMed Central

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

    Image classification is an issue that utilizes image processing, pattern recognition, and classification methods. Automatic medical image classification is a progressive area of image classification and is expected to develop further in the future; automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers, and the main advantages and drawbacks of this classifier, are also presented. PMID:23493054

  19. The 3-D image recognition based on fuzzy neural network technology

    NASA Technical Reports Server (NTRS)

    Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei

    1993-01-01

    A three-dimensional stereoscopic image recognition system based on fuzzy neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Two CCD color camera images are fed to the preprocessing part, where several operations, including an RGB-HSV transformation, are performed. A multilayer perceptron is used for line detection in the feature extraction part, and a fuzzy matching technique is then applied in the matching part. The system is realized on a Sun SPARCstation with a special image input hardware system. An experimental result on bottle images is also presented.

  20. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and the separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze by higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, which makes it possible to create intelligent computer vision systems for design and manufacturing.

  1. Interhospital network system using the worldwide web and the common gateway interface.

    PubMed

    Oka, A; Harima, Y; Nakano, Y; Tanaka, Y; Watanabe, A; Kihara, H; Sawada, S

    1999-05-01

    We constructed an interhospital network system using the World Wide Web (WWW) and the Common Gateway Interface (CGI). Original clinical images are digitized and stored as a database for educational and research purposes. Personal computers (PCs) are available for data handling and browsing. Our system is simple: digitized images are stored on a Unix server machine. Images of important and interesting clinical cases are selected and registered into the image database using CGI. The main image format is 8- or 12-bit Joint Photographic Experts Group (JPEG). Original clinical images are finally stored on CD-ROM using a CD recorder. The image viewer can browse all of the images for one case at once as thumbnail pictures, and image quality can be selected depending on the user's purpose. Using the network system, clinical images of interesting cases can be rapidly transmitted to and discussed with other related hospitals. Data transmission from related hospitals takes 1 to 2 minutes per 500 Kbytes of data; more distant hospitals (e.g., Rakusai Hospital, Kyoto) take about 1 minute more. The mean number of accesses to our image database in a recent 3-month period was 470. There are about 200 cases in total in our image database, acquired over the past 2 years. Our system is useful for communication and image handling between hospitals; we describe the elements of our system and the image database.

  2. The National Institutes of Health Clinical Center Digital Imaging Network, Picture Archival and Communication System, and Radiology Information System.

    PubMed

    Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V

    2001-06-01

    In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and the expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADT)/demographics, orders, appointment notifications, doctor updates, and results.

  3. Development of a system for transferring images via a network: supporting a regional liaison.

    PubMed

    Mihara, Naoki; Manabe, Shiro; Takeda, Toshihiro; Shinichirou, Kitamura; Junichi, Murakami; Kouji, Kiso; Matsumura, Yasushi

    2013-01-01

    We developed a system that transfers images via a network and started using it in our hospital's PACS (Picture Archiving and Communication Systems) in 2006. The system has since been re-developed and is now running so that it can support a regional liaison in the future. It has become possible to automatically transfer images simply by selecting a destination hospital that has been registered in advance at the relay server. The gateway of this system can send images to a multi-center relay management server, which receives the images and resends them. This system has the potential to be useful for image exchange and to serve as a regional medical liaison.

  4. Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network

    PubMed Central

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-01-01

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions. PMID:25171121

  5. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    PubMed

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.

  6. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials, such as wood, textile, or paper, are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed; a complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
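
    The two ingredients of this front end, a Gaussian low-pass pyramid and a local dominant orientation quantized to multiples of π/4, can be sketched as follows; the texture variable is a hypothetical grayscale image of the wood surface.

        # Sketch: Gaussian pyramid + local edge orientation at multiples of pi/4.
        import numpy as np
        from scipy import ndimage

        def gaussian_pyramid(img, levels=3, sigma=1.0):
            # Each level is a low-pass filtered, downsampled copy of the previous one.
            pyramid = [img]
            for _ in range(levels - 1):
                pyramid.append(ndimage.gaussian_filter(pyramid[-1], sigma)[::2, ::2])
            return pyramid

        def dominant_orientation(img):
            gy = ndimage.sobel(img, axis=0)
            gx = ndimage.sobel(img, axis=1)
            angle = np.arctan2(gy, gx)                 # continuous edge orientation
            return (np.round(angle / (np.pi / 4)) % 4).astype(np.uint8)  # 4 bins

        # for level in gaussian_pyramid(texture):
        #     features = dominant_orientation(level)   # fed to the classifier network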

  7. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2003-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.

  8. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2004-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
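
    The windowed centroid described in these two patent records reduces to inner products of the pixel window with its row and column indices, followed by a divide. A numpy sketch of that arithmetic (the window contents are hypothetical):

        # Sketch: windowed centroid as inner products plus a division.
        import numpy as np

        def window_centroid(window):
            # window: 2-D array of pixel signals from the selected rows/columns.
            rows, cols = np.indices(window.shape)
            total = window.sum()                     # denominator (divider circuit)
            x = np.sum(window * cols) / total        # inner product with column index
            y = np.sum(window * rows) / total        # inner product with row index
            return float(x), float(y)

        spot = np.zeros((8, 8))
        spot[3, 5] = 1.0                             # hypothetical bright pixel
        print(window_centroid(spot))                 # -> (5.0, 3.0)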

  9. 1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.

    PubMed

    Channin, D S

    1995-12-01

    Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.

  10. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed by a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the implementation of the segmentation process, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for a clustering, based on the Euclidean distance.

  11. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  12. A network-based training environment: a medical image processing paradigm.

    PubMed

    Costaridou, L; Panayiotakis, G; Sakellaropoulos, P; Cavouras, D; Dimopoulos, J

    1998-01-01

    The capability of interactive multimedia and Internet technologies is investigated with respect to the implementation of a distance learning environment. The system is built according to a client-server architecture, based on the Internet infrastructure, composed of server nodes conceptually modelled as WWW sites. Sites are implemented by customization of available components. The environment integrates network-delivered interactive multimedia courses, network-based tutoring, SIG support, information databases of professional interest, as well as course and tutoring management. This capability has been demonstrated by means of an implemented system, validated with digital image processing content, specifically image enhancement. Image enhancement methods are theoretically described and applied to mammograms. Emphasis is given to the interactive presentation of the effects of algorithm parameters on images. The system end-user access depends on available bandwidth, so high-speed access can be achieved via LAN or local ISDN connections. Network based training offers new means of improved access and sharing of learning resources and expertise, as promising supplements in training.

  13. Using digital watermarking to enhance security in wireless medical image transmission.

    PubMed

    Giakoumaki, Aggeliki; Perakis, Konstantinos; Banitsas, Konstantinos; Giokas, Konstantinos; Tachakra, Sapal; Koutsouris, Dimitris

    2010-04-01

    During the last few years, wireless networks have been increasingly used both inside hospitals and in patients' homes to transmit medical information. In general, wireless networks suffer from decreased security. However, digital watermarking can be used to secure medical information. In this study, we focused on combining wireless transmission and digital watermarking technologies to better secure the transmission of medical images within and outside the hospital. We utilized an integrated system comprising the wireless network and the digital watermarking module to conduct a series of tests. The test results were evaluated by medical consultants. They concluded that the images suffered no visible quality degradation and maintained their diagnostic integrity. The proposed integrated system presented reasonable stability, and its performance was comparable to that of a fixed network. This system can enhance security during the transmission of medical images through a wireless channel.
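
    As a toy illustration of invisible watermarking (the study does not specify its algorithm, so this is not the evaluated scheme), a least-significant-bit embedding changes each marked pixel by at most one gray level, leaving diagnostic appearance intact while carrying a recoverable mark:

        # Sketch: least-significant-bit watermark for an 8-bit image (illustrative only).
        import numpy as np

        def embed(image, mark_bits):
            flat = image.flatten()                   # works on a copy
            flat[: mark_bits.size] = (flat[: mark_bits.size] & 0xFE) | mark_bits
            return flat.reshape(image.shape)

        def extract(image, n_bits):
            return image.flatten()[:n_bits] & 1

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder image
        mark = np.random.randint(0, 2, 128, dtype=np.uint8)         # placeholder mark
        marked = embed(img, mark)

        assert np.array_equal(extract(marked, mark.size), mark)     # mark recovered
        print(np.abs(marked.astype(int) - img.astype(int)).max())   # at most 1 level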

  14. A pathologist-designed imaging system for anatomic pathology signout, teaching, and research.

    PubMed

    Schubert, E; Gross, W; Siderits, R H; Deckenbaugh, L; He, F; Becich, M J

    1994-11-01

    Pathology images are derived from gross surgical specimens, light microscopy, immunofluorescence, electron microscopy, molecular diagnostic gels, flow cytometry, image analysis data, and clinical laboratory data in graphic form. We have implemented a network of desktop personal computers (PCs) that allow us to easily capture, store, and retrieve gross and microscopic, anatomic, and research pathology images. System architecture involves multiple image acquisition and retrieval sites and a central file server for storage. The digitized images are conveyed via a local area network to and from image capture or display stations. Acquisition sites consist of a high-resolution camera connected to a frame grabber card in a 486-type personal computer, equipped with 16 MB RAM, a 1.05-gigabyte hard drive, and a 32-bit ethernet card for access to our anatomic pathology reporting system. We have designed a push-button workstation for acquiring and indexing images that does not significantly interfere with surgical pathology sign-out. Advantages of the system include the following: (1) Improving patient care: the availability of gross images at time of microscopic sign-out, verification of recurrence of malignancy from archived images, monitoring of bone marrow engraftment and immunosuppressive intervention after bone marrow/solid organ transplantation on repeat biopsies, and ability to seek instantaneous consultation with any pathologist on the network; (2) enhancing the teaching environment: building a digital surgical pathology atlas, improving the availability of images for conference support, and sharing cases across the network; (3) enhancing research: case study compilation, metastudy analysis, and availability of digitized images for quantitative analysis and permanent/reusable image records for archival study; and (4) other practical and economic considerations: storing case requisition images and hand-drawn diagrams deters the spread of gross room contaminants and results in considerable cost savings in photographic media for conferences, improved quality assurance by porting control stains across the network, and a multiplicity of other advantages that enhance image and information management in pathology.

  15. Network of fully integrated multispecialty hospital imaging systems

    NASA Astrophysics Data System (ADS)

    Dayhoff, Ruth E.; Kuzmak, Peter M.

    1994-05-01

    The Department of Veterans Affairs (VA) DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images are displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system, allowing integrated displays of text and image data across medical specialties. Clinicians can view screens of 'thumbnail' images for all studies or procedures performed on a selected patient. Two VA medical centers currently have DHCP Imaging Systems installed, and others are planned. All VA medical centers and other VA facilities are connected by a wide area packet-switched network. The VA's electronic mail software has been modified to allow inclusion of binary data such as images in addition to the traditional text data. Testing of this multimedia electronic mail system is underway for medical teleconsultation.

  16. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, an elevator hall, a room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  17. Automatic identification of species with neural networks.

    PubMed

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  18. Detection of Oil Chestnuts Infected by Blue Mold Using Near-Infrared Hyperspectral Imaging Combined with Artificial Neural Networks.

    PubMed

    Feng, Lei; Zhu, Susu; Lin, Fucheng; Su, Zhenzhu; Yuan, Kangpei; Zhao, Yiying; He, Yong; Zhang, Chu

    2018-06-15

    Mildew damage is a major cause of poor chestnut quality and yield loss. In this study, a near-infrared hyperspectral imaging system covering the 874-1734 nm spectral range was applied to detect mildew damage to chestnuts caused by blue mold. Principal component analysis (PCA) score images were first employed to qualitatively and intuitively distinguish moldy chestnuts from healthy chestnuts. Spectral data were extracted from the hyperspectral images, and a successive projections algorithm (SPA) was used to select 12 optimal wavelengths. Artificial neural networks, including a back propagation neural network (BPNN), an evolutionary neural network (ENN), an extreme learning machine (ELM), a general regression neural network (GRNN) and a radial basis neural network (RBNN), were used to build models on the full spectra and the optimal wavelengths to distinguish moldy chestnuts. The BPNN and ENN models using full spectra and optimal wavelengths obtained satisfactory performances, with classification accuracies all surpassing 99%. The results indicate the potential for rapid and non-destructive detection of moldy chestnuts by hyperspectral imaging, which would help to develop an online detection system for healthy and blue-mold-infected chestnuts.
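
    The PCA score-image step can be sketched by unfolding the hyperspectral cube to a pixels-by-bands matrix, fitting PCA, and folding the scores back into images; the cube below is a random placeholder for the 874-1734 nm data.

        # Sketch: PCA score images from a hyperspectral cube (rows x cols x bands).
        import numpy as np
        from sklearn.decomposition import PCA

        cube = np.random.rand(100, 100, 256)                  # placeholder cube
        rows, cols, bands = cube.shape

        pca = PCA(n_components=3)
        scores = pca.fit_transform(cube.reshape(-1, bands))   # pixels x components
        score_images = scores.reshape(rows, cols, -1)         # one image per component

        # Each score image can be inspected to separate moldy from healthy regions;
        # the loadings hint at informative bands before SPA wavelength selection.
        print(pca.explained_variance_ratio_)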

  19. Web-based monitoring and management system for integrated enterprise-wide imaging networks

    NASA Astrophysics Data System (ADS)

    Ma, Keith; Slik, David; Lam, Alvin; Ng, Won

    2003-05-01

    Mass proliferation of IP networks and the maturity of standards has enabled the creation of sophisticated image distribution networks that operate over Intranets, Extranets, Communities of Interest (CoI) and even the public Internet. Unified monitoring, provisioning and management of such systems at the application and protocol levels represent a challenge. This paper presents a web based monitoring and management tool that employs established telecom standards for the creation of an open system that enables proactive management, provisioning and monitoring of image management systems at the enterprise level and across multi-site geographically distributed deployments. Utilizing established standards including ITU-T M.3100, and web technologies such as XML/XSLT, JSP/JSTL, and J2SE, the system allows for seamless device and protocol adaptation between multiple disparate devices. The goal has been to develop a unified interface that provides network topology views, multi-level customizable alerts, real-time fault detection as well as real-time and historical reporting of all monitored resources, including network connectivity, system load, DICOM transactions and storage capacities.

  20. Local area networks in an imaging environment.

    PubMed

    Noz, M E; Maguire, G Q; Erdman, W A

    1986-01-01

    There is great interest at present in incorporating image-management systems popularly referred to as picture archiving and communication systems (PACS) into imaging departments. This paper will describe various aspects of local area networks (LANs) for medical images and will give a definition of terms and classification of devices by describing a possible system which links various digital image sources through a high-speed data link and a common image format, allows for viewing and processing of all images produced within the complex, and eliminates the transport of films. The status of standards governing LAN and particularly PACS systems along with a proposed image exchange format will be given. Prototype systems, particularly a system for nuclear medicine images, will be presented, as well as the prospects for the immediate future in terms of installations started and commercial products available. A survey of the many questions that arise in the development of a PACS for medical images and also a survey of the presently suggested/adopted answers will be given.

  1. Implementation of remote monitoring and managing switches

    NASA Astrophysics Data System (ADS)

    Leng, Junmin; Fu, Guo

    2010-12-01

    In order to strengthen the security of the network and provide greater convenience and efficiency for operators and managers, a system for remotely monitoring and managing switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (IP camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over limited network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches, and the control and administration software is designed accordingly. The dynamic web page platform Java Server Pages (JSP) is used in the system, and an SQL (Structured Query Language) Server database is applied to store and access image information, network messages, and user data. The reliability and security of the system are further strengthened by access control. The software is cross-platform, so multiple operating systems (UNIX, Linux and Windows) are supported. The application of the system can greatly reduce manpower costs and can quickly find and solve problems.

  2. Field testing of a remote controlled robotic tele-echo system in an ambulance using broadband mobile communication technology.

    PubMed

    Takeuchi, Ryohei; Harada, Hiroshi; Masuda, Kohji; Ota, Gen-ichiro; Yokoi, Masaki; Teramura, Nobuyasu; Saito, Tomoyuki

    2008-06-01

    We report the testing of a mobile Robotic Tele-echo system that was placed in an ambulance and successfully transmitted clear, real-time echo imaging of a patient's abdomen to the destination hospital, from where the device was being remotely operated. Two-way communication between the paramedics in the vehicle and a doctor standing by at the hospital was undertaken. The robot was equipped with an ultrasound probe which was remotely controlled by the clinician at the hospital, and ultrasound images of the patient were transmitted wirelessly. The quality of the ultrasound images transmitted over the public mobile telephone networks was compared with that of images transmitted over the Multimedia Wireless Access Network (a private network). The transmission rates over the public and private networks were approximately 256 kbps and 3 Mbps, respectively. Our results indicate that ultrasound images of far higher definition could be obtained through the private network.

  3. Crystal surface analysis using matrix textural features classified by a probabilistic neural network

    NASA Astrophysics Data System (ADS)

    Sawyer, Curry R.; Quach, Viet; Nason, Donald; van den Berg, Lodewijk

    1991-12-01

    A system is under development in which surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping sub-images and features are extracted from each sub-image based on statistical measures of the gray tone distribution, according to the method of Haralick. Twenty parameters are derived from each sub-image and presented to a probabilistic neural network (PNN) for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data is gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities.
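    The gray-tone statistics mentioned above follow Haralick's co-occurrence approach. A rough, hedged sketch is given below, assuming scikit-image (which spells these functions graycomatrix/graycoprops in recent releases and greycomatrix/greycoprops in older ones); this is not the authors' code, and the distance, angle and property choices are illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def texture_features(sub_image):
            """A few Haralick-style GLCM statistics for one grayscale sub-image."""
            # Rescale to 8-bit gray levels so the co-occurrence matrix stays small.
            img = np.uint8(255 * (sub_image - sub_image.min()) / (np.ptp(sub_image) + 1e-9))
            glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    A vector of such statistics per sub-image would then be presented to the classifier, as the abstract describes for the probabilistic neural network.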

  4. Cloud-based image sharing network for collaborative imaging diagnosis and consultation

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo

    2018-03-01

    In this presentation, we present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation over the Internet, which enables radiologists, specialists and physicians located at different sites to perform imaging diagnosis or consultation collaboratively and interactively for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system and multi-platform interactive image display devices, together with secured messaging and data communication. There are three kinds of components in the network: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. The network has been deployed on the Alibaba public cloud platform, accessed through the Internet, since March 2017, and has been used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.

  5. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a major problem, and a Web-based system is the most cost-effective solution. However, a Web browser alone cannot display medical images with the required image processing, such as lookup table transformations. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by client-side scripting such as JavaScript. This combination provides the look and feel of an imaging workstation, in both functionality and speed. Real-time updating of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system is highly scalable in the number of clients.
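    The central idea is that the lookup-table transformation happens on the server, which returns plain Web-displayable images. A minimal sketch of that step is shown below, assuming pydicom and Pillow; the paper does not specify its implementation, and modality rescale handling is omitted for brevity.

        import numpy as np
        import pydicom
        from PIL import Image

        def render_window(dicom_path, center, width, out_path):
            """Apply a window/level lookup to a DICOM image and save an 8-bit PNG."""
            ds = pydicom.dcmread(dicom_path)
            pixels = ds.pixel_array.astype(np.float32)   # rescale slope/intercept ignored for brevity
            low, high = center - width / 2.0, center + width / 2.0
            windowed = np.clip((pixels - low) / (high - low), 0.0, 1.0)
            Image.fromarray((windowed * 255).astype(np.uint8)).save(out_path)

        # e.g. render_window("slice.dcm", center=40, width=400, out_path="slice.png")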

  6. Image sensor system with bio-inspired efficient coding and adaptation.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
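    For readers who want to experiment with these three strategies in software, a hedged NumPy/SciPy analogue is sketched below; the actual system implements them in FPGA and APS hardware, and the kernel size and gain target here are arbitrary.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def bio_inspired_encode(intensity, kernel=7, target_rms=0.2):
            """Software analogue of logarithmic transform, local average subtraction
            and a simple feedback-style gain control."""
            log_img = np.log1p(intensity.astype(np.float64))    # compressive, log-like response
            local_avg = uniform_filter(log_img, size=kernel)    # stand-in for the resistive network
            contrast = log_img - local_avg                      # local average subtraction
            return contrast * (target_rms / (contrast.std() + 1e-9))  # crude gain control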

  7. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    PubMed

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite for applying CNNs to real-world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has a direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
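    The pre-processing the authors describe amounts to resizing plus palette-based color quantization before transmission. A hedged Pillow sketch of such a step is shown below; the target size and palette size are illustrative, not the values used in the paper.

        from io import BytesIO
        from PIL import Image

        def compress_for_upload(path, size=(64, 64), colors=16):
            """Resize and colour-quantize an image before sending it to the backend CNN."""
            img = Image.open(path).convert("RGB")
            img = img.resize(size)                       # downsample
            img = img.quantize(colors=colors)            # palette-based colour quantization
            buf = BytesIO()
            img.save(buf, format="PNG")
            return buf.getvalue()                        # bytes to transmit over the WISN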

  8. Computer-aided diagnosis workstation and telemedicine network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2009-02-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. Moreover, there is a shortage of doctors in Japan who can diagnose such medical images. To overcome these problems, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using a helical CT scanner for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes "Encryption of file" and "Success in login" effective; as a result, patients' private information is protected. The screen of the Web medical image conference system can be shared by two or more web conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and the telemedicine network, can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  9. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. The human brain has been found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an Image Understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  10. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and the associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.

  11. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and the associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  12. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match them with unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples is used for the different directions of the robot end-effector, a feedforward neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector, and another feedforward neural network module is used to estimate the motion from a sequence of images and to control the movements of the end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating its motion with the preprocessing stage, the whole system tracks the robot end-effector effectively.

  13. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    NASA Astrophysics Data System (ADS)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid severe consequences. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed with the results of a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of these performance measures.
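    As a hedged illustration of the bi-level RBF idea (not the authors' implementation; the center selection, kernel width and 0/1 labels are arbitrary choices here), a minimal RBF network can be fitted with random centers and least-squares output weights:

        import numpy as np

        def train_rbf(X, y, n_centers=10, sigma=1.0, seed=0):
            """Minimal RBF network: random centers, Gaussian activations,
            output weights by least squares. y is expected in {0, 1}."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), n_centers, replace=False)]
            phi = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2 / (2 * sigma ** 2))
            weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
            return centers, weights

        def predict_rbf(X, centers, weights, sigma=1.0):
            phi = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2 / (2 * sigma ** 2))
            return (phi @ weights > 0.5).astype(int)     # bi-level decision: abnormal vs. normal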

  14. The key image and case log application: new radiology software for teaching file creation and case logging that incorporates elements of a social network.

    PubMed

    Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David

    2014-07-01

    The aim was to create novel radiology key image software that is easy to use for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communication systems and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface have been inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available, and after this early experience period the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). All respondents (100%) also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm, with efforts made to maximize its ease of use and to include characteristics inspired by social networking Web sites that give the system additional functionality, such as individual case logging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  15. Providing image management and communication functionality as an integral part of an existing hospital information system

    NASA Astrophysics Data System (ADS)

    Dayhoff, Ruth E.; Maloney, Daniel L.

    1990-08-01

    The effective delivery of health care has become increasingly dependent on a wide range of medical data which includes a variety of images. Manual and computer-based medical records ordinarily do not contain image data, leaving the physician to deal with a fragmented patient record widely scattered throughout the hospital. The Department of Veterans Affairs (VA) is currently installing a prototype hospital information system (HIS) workstation network to demonstrate the feasibility of providing image management and communications (IMAC) functionality as an integral part of an existing hospital information system. The core of this system is a database management system adapted to handle images as a new data type. A general model for this integration is discussed and specifics of the hospital-wide network of image display workstations are given.

  16. Autonomous chemical and biological miniature wireless-sensor

    NASA Astrophysics Data System (ADS)

    Goldberg, Bar-Giora

    2005-05-01

    The presentation discusses a new concept and a paradigm shift in biological, chemical and explosive sensor system design and deployment: from large, heavy, centralized and expensive systems to distributed wireless sensor networks utilizing miniature platforms (nodes) that are lightweight, low cost and wirelessly connected. These new systems are possible due to the emergence and convergence of new innovative radio, imaging, networking and sensor technologies. Miniature integrated radio-sensor networks are a technology whose time has come. These network systems are based on large numbers of distributed low-cost, short-range wireless platforms that sense and process their environment and communicate data through a network to a command center. The recent emergence of chemical and explosive sensor technology based on silicon nanostructures, coupled with the fast evolution of low-cost CMOS imagers, low-power DSP engines and integrated radio chips, has created an opportunity to realize the vision of autonomous wireless networks. These threat detection networks will perform sophisticated analysis at the sensor node and convey alarm information up the command chain. Sensor networks of this type are expected to revolutionize the ability to detect and locate biological, chemical, or explosive threats. The ability to distribute large numbers of low-cost sensors over large areas enables these devices to be close to the targeted threats, thereby improving detection efficiency and enabling rapid counter-responses. These sensor networks will be used for homeland security, shipping container monitoring, and other applications such as laboratory medical analysis, drug discovery, automotive, environmental and/or in-vivo monitoring. Avaak's system concept is to image a chromatic biological, chemical and/or explosive sensor with a digital imager, analyze the images and distribute alarm or image data wirelessly through the network. All of the imaging, processing and communication would take place within the miniature, low-cost distributed sensor platforms. This concept, however, presents a significant challenge due to the combination and convergence of required new technologies, as mentioned above. Passive biological and chemical sensors with very high sensitivity, which require no assaying, are in development using a technique to optically and chemically encode silicon wafers with tailored nanostructures. The silicon wafer is patterned with nanostructures designed to change colors and patterns when exposed to the target analytes (TICs, TIMs, VOCs). A small video camera detects the color and pattern changes on the sensor. To determine whether an alarm condition is present, an on-board DSP processor, using specialized image processing algorithms and statistical analysis, determines whether color gradient changes have occurred on the sensor array. These sensors can detect several agents simultaneously. This system is currently under development by Avaak, with funding from DARPA through an SBIR grant.

  17. Intelligent image capture of cartridge cases for firearms examiners

    NASA Astrophysics Data System (ADS)

    Jones, Brett C.; Guerci, Joseph R.

    1997-02-01

    The FBI's DRUGFIRE™ system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the many human examiners' visual perception and the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test using a custom-designed neural network.

  18. GLOBECOM '87 - Global Telecommunications Conference, Tokyo, Japan, Nov. 15-18, 1987, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference on global telecommunications discusses topics in the fields of Integrated Services Digital Network (ISDN) technology field trial planning and results to date, motion video coding, ISDN networking, future network communications security, flexible and intelligent voice/data networks, Asian and Pacific lightwave and radio systems, subscriber radio systems, the performance of distributed systems, signal processing theory, satellite communications modulation and coding, and terminals for the handicapped. Also discussed are knowledge-based technologies for communications systems, future satellite transmissions, high quality image services, novel digital signal processors, broadband network access interface, traffic engineering for ISDN design and planning, telecommunications software, coherent optical communications, multimedia terminal systems, advanced speed coding, portable and mobile radio communications, multi-Gbit/second lightwave transmission systems, enhanced capability digital terminals, communications network reliability, advanced antimultipath fading techniques, undersea lightwave transmission, image coding, modulation and synchronization, adaptive signal processing, integrated optical devices, VLSI technologies for ISDN, field performance of packet switching, CSMA protocols, optical transport system architectures for broadband ISDN, mobile satellite communications, indoor wireless communication, echo cancellation in communications, and distributed network algorithms.

  19. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an aberration-free system can still be blurred by defocus due to motion in depth and/or zooming. A precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic model of the PSF precisely because of the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in probability and statistics, a modified multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Different from a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer; it introduces a 2-bit controlled-NOT quantum gate to control the output and adopts two texture and edge features as the input vector. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that this method achieves high precision and strong generalization ability.
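    For context, the defocus degradation that such an estimator targets is often modeled analytically as a uniform-disk ("pillbox") PSF. The sketch below, assuming NumPy/SciPy and an arbitrary radius parameter, shows only that forward model; it does not implement the quantum neural estimator itself.

        import numpy as np
        from scipy.signal import fftconvolve

        def pillbox_psf(radius, size=None):
            """Uniform-disk PSF commonly used to model defocus blur."""
            size = size or (2 * int(np.ceil(radius)) + 1)
            ax = np.arange(size) - (size - 1) / 2.0
            xx, yy = np.meshgrid(ax, ax)
            psf = (xx ** 2 + yy ** 2 <= radius ** 2).astype(np.float64)
            return psf / psf.sum()

        def defocus(image, radius):
            """Simulate defocus blur by convolving with the pillbox PSF."""
            return fftconvolve(image, pillbox_psf(radius), mode="same")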

  20. The use of global image characteristics for neural network pattern recognitions

    NASA Astrophysics Data System (ADS)

    Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.

    2017-04-01

    A recognition system is considered in which information is conveyed by images of symbols generated by a television camera. The coefficients of a two-dimensional Fourier transform, generated in a special way, serve as the object descriptors. A one-layer neural network trained on reference images is used to solve the classification task. Fast learning of the neural network, with single-neuron computation of the coefficients, is applied.
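    The abstract does not specify how the Fourier coefficients are generated "in a special way", so the following is only a hedged illustration of using low-frequency 2D FFT magnitudes as global symbol descriptors, with an arbitrary block size.

        import numpy as np

        def fourier_descriptor(symbol_image, k=8):
            """Global descriptor: magnitudes of the k x k lowest-frequency
            2D Fourier coefficients, normalized for overall brightness."""
            spectrum = np.fft.fftshift(np.fft.fft2(symbol_image.astype(np.float64)))
            cy, cx = np.array(spectrum.shape) // 2
            block = np.abs(spectrum[cy - k // 2: cy + k // 2, cx - k // 2: cx + k // 2])
            return (block / (block.sum() + 1e-9)).ravel()

    Descriptor vectors of this kind would then be fed to the one-layer classifier trained on the reference images.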

  1. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using a helical CT scanner for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes "Encryption of file" and "Success in login" effective; as a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and the telemedicine network, can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  2. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks (WSNs), an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding of how to support high-quality visual communications in such a demanding context.

  3. A Checklist for Successful Quantitative Live Cell Imaging in Systems Biology

    PubMed Central

    Sung, Myong-Hee

    2013-01-01

    Mathematical modeling of signaling and gene regulatory networks has provided unique insights about systems behaviors for many cell biological problems of medical importance. Quantitative single cell monitoring has a crucial role in advancing systems modeling of molecular networks. However, due to the multidisciplinary techniques that are necessary for adaptation of such systems biology approaches, dissemination to a wide research community has been relatively slow. In this essay, I focus on some technical aspects that are often under-appreciated, yet critical in harnessing live cell imaging methods to achieve single-cell-level understanding and quantitative modeling of molecular networks. The importance of these technical considerations will be elaborated with examples of successes and shortcomings. Future efforts will benefit by avoiding some pitfalls and by utilizing the lessons collectively learned from recent applications of imaging in systems biology. PMID:24709701

  4. An integrated multimedia medical information network system.

    PubMed

    Yamamoto, K; Makino, J; Sasagawa, N; Nagira, M

    1998-01-01

    An integrated multimedia medical information network system has been developed at Shimane Medical University to organize the medical information generated by each section and to provide information services useful for education, research and clinical practice. This report outlines our system. It is designed to serve as a distributed database for electronic medical records and images. We are developing an MML engine that is to be linked to the World Wide Web (WWW) network system. To users, this system will present an integrated multimedia representation of the patient record, providing access to both the image and text-based data required for effective clinical decision making and medical education.

  5. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices), and the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.

  6. Combining Image and Non-Image Data for Automatic Detection of Retina Disease in a Telemedicine Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aykac, Deniz; Chaum, Edward; Fox, Karen

    A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.

  7. A flexible, open, decentralized system for digital pathology networks.

    PubMed

    Schuler, Robert; Smith, David E; Kumaraguruparan, Gowri; Chervenak, Ann; Lewis, Anne D; Hyde, Dallas M; Kesselman, Carl

    2012-01-01

    High-resolution digital imaging is enabling digital archiving and sharing of digitized microscopy slides and new methods for digital pathology. Collaborative research centers, outsourced medical services, and multi-site organizations stand to benefit from sharing pathology data in a digital pathology network. Yet significant technological challenges remain due to the large size and volume of digitized whole slide images. While information systems do exist for managing local pathology laboratories, they tend to be oriented toward narrow clinical use cases or offer closed ecosystems around proprietary formats. Few solutions exist for networking digital pathology operations. Here we present a system architecture and implementation of a digital pathology network and share results from a production system that federates major research centers.

  8. A Flexible, Open, Decentralized System for Digital Pathology Networks

    PubMed Central

    SMITH, David E.; KUMARAGURUPARAN, Gowri; CHERVENAK, Ann; LEWIS, Anne D.; HYDE, Dallas M.; KESSELMAN, Carl

    2014-01-01

    High-resolution digital imaging is enabling digital archiving and sharing of digitized microscopy slides and new methods for digital pathology. Collaborative research centers, outsourced medical services, and multi-site organizations stand to benefit from sharing pathology data in a digital pathology network. Yet significant technological challenges remain due to the large size and volume of digitized whole slide images. While information systems do exist for managing local pathology laboratories, they tend to be oriented toward narrow clinical use cases or offer closed ecosystems around proprietary formats. Few solutions exist for networking digital pathology operations. Here we present a system architecture and implementation of a digital pathology network and share results from a production system that federates major research centers. PMID:22941985

  9. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    PubMed

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

    Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging because Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows that it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches perform better than the standard methods, and that the proposed network structure can further reduce the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.
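    The GroupU-net architecture itself is not reproduced in the abstract; as a hedged PyTorch sketch of the general group-convolution idea (channel counts and layer names are invented, and this is not the authors' network definition), one block that keeps Dixon-derived and ZTE-derived feature maps in separate filter groups before mixing them might look like:

        import torch
        import torch.nn as nn

        class GroupedFusionBlock(nn.Module):
            """Group convolution over concatenated Dixon/ZTE feature maps:
            groups=2 keeps the two modalities in separate filter groups,
            and a 1x1 convolution then mixes them."""
            def __init__(self, channels_per_modality=32):
                super().__init__()
                c = 2 * channels_per_modality
                self.grouped = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=2)
                self.mix = nn.Conv2d(c, c, kernel_size=1)
                self.act = nn.ReLU(inplace=True)

            def forward(self, dixon_feats, zte_feats):
                x = torch.cat([dixon_feats, zte_feats], dim=1)
                return self.act(self.mix(self.act(self.grouped(x))))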

  10. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition when an object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexity by using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computation of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.

  11. Improved automatic adjustment of density and contrast in FCR system using neural network

    NASA Astrophysics Data System (ADS)

    Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo

    1994-05-01

    The FCR system automatically adjusts image density and contrast by analyzing the histogram of the image data within the radiation field. The advanced image recognition methods proposed in this paper, which use neural network technology, can improve this automatic adjustment performance. There are two methods, both based on a three-layer neural network with back-propagation: in one, the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest within the histogram changes with differences in positioning, and the latter is effective for imaging menus such as the pediatric chest, in which the histogram shape changes with differences in positioning. We experimentally confirmed the validity of these methods for automatic adjustment performance compared with conventional histogram analysis methods.
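    To make the histogram-input variant concrete, a hedged sketch is given below; the bin count, hidden-layer size and the use of scikit-learn's MLP are assumptions for illustration, and training_images/targets are hypothetical placeholders rather than data from the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def histogram_input(image, bins=64):
            """Normalized histogram of pixel values inside the radiation field,
            used as the input vector of the histogram-based method."""
            hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, float(image.max()) + 1))
            return hist / (hist.sum() + 1e-9)

        # A three-layer back-propagation network: histogram in, (density, contrast) out.
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
        # model.fit(np.stack([histogram_input(img) for img in training_images]), targets)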

  12. System design and implementation of digital-image processing using computational grids

    NASA Astrophysics Data System (ADS)

    Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping

    2005-06-01

    As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved and the difficulties of data processing and transfer, an important issue for computer and geoscience experts is developing Internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to resolve imbalances in network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology, namely spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated on the basis of an experimental analysis of remotely sensed image-processing tasks, and the results confirm the feasibility of applying computational grids to digital-image processing.

  13. Detection of Pigment Networks in Dermoscopy Images

    NASA Astrophysics Data System (ADS)

    Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui

    2017-02-01

    One of the most important structures in dermoscopy images is the pigment network, whose detection is also one of the most challenging and fundamental tasks for dermatologists in the early detection of melanoma. This paper presents an automatic system to detect pigment networks in dermoscopy images. The proposed algorithm consists of four stages. First, a pre-processing algorithm is carried out in order to remove noise and improve the quality of the image. Second, a bank of directional filters and morphological connected component analysis are applied to detect the pigment networks. Third, features are extracted from the detected image for use in the subsequent stage. Fourth, classification is performed by applying a feed-forward neural network, in order to classify each region as either normal or abnormal skin. The method was tested on a dataset of 200 dermoscopy images from Hospital Pedro Hispano (Matosinhos), and better results were produced compared to previous studies.
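    The second stage (directional filtering followed by connected-component cleanup) can be approximated with standard tools. The sketch below uses a small Gabor filter bank and scikit-image morphology as stand-ins; the frequency, threshold and area parameters are illustrative and not those of the paper.

        import numpy as np
        from skimage.filters import gabor
        from skimage.measure import label, regionprops

        def pigment_network_mask(gray, frequency=0.15, n_orientations=6, min_area=50):
            """Directional filter bank plus connected-component cleanup (illustrative)."""
            responses = []
            for k in range(n_orientations):
                real, _ = gabor(gray, frequency=frequency, theta=k * np.pi / n_orientations)
                responses.append(np.abs(real))
            strength = np.max(responses, axis=0)
            mask = strength > strength.mean() + strength.std()   # crude global threshold
            labelled = label(mask)
            keep = [r.label for r in regionprops(labelled) if r.area >= min_area]
            return np.isin(labelled, keep)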

  14. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  15. Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool.

    PubMed

    Gardner, G G; Keating, D; Williamson, T H; Elliott, A T

    1996-11-01

    To determine if neural networks can detect diabetic features in fundus images and compare the network against an ophthalmologist screening a set of fundus images. 147 diabetic and 32 normal images were captured from a fundus camera, stored on computer, and analysed using a back propagation neural network. The network was trained to recognise features in the retinal image. The effects of digital filtering techniques and different network variables were assessed. 200 diabetic and 101 normal images were then randomised and used to evaluate the network's performance for the detection of diabetic retinopathy against an ophthalmologist. Detection rates for the recognition of vessels, exudates, and haemorrhages were 91.7%, 93.1%, and 73.8% respectively. When compared with the results of the ophthalmologist, the network achieved a sensitivity of 88.4% and a specificity of 83.5% for the detection of diabetic retinopathy. Detection of vessels, exudates, and haemorrhages was possible, with success rates dependent upon preprocessing and the number of images used in training. When compared with the ophthalmologist, the network achieved good accuracy for the detection of diabetic retinopathy. The system could be used as an aid to the screening of diabetic patients for retinopathy.

  16. On computer vision in wireless sensor networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Nina M.; Ko, Teresa H.

    Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. In contrast with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.

  17. Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system

    NASA Astrophysics Data System (ADS)

    Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.

    2018-03-01

    Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are typically executed on conventional von Neumann processor architectures or GPUs, which is suboptimal. Neuromorphic processors that replicate the structure of the brain are better suited to training and executing deep learning models for image segmentation because they rely on massively parallel processing. However, given that they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them, and adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate, for the first time, spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and we validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1, 0, 1} using the Energy Efficient Deep Neuromorphic (EEDN) network training algorithm. Given its 1 million neurons and 256 million synapses, the scale and size of the neural network implemented by the IBM TrueNorth allow us to execute the requisite mapping between segmented images and non-uniform intensity MR images more than 20 times faster than on a GPU-accelerated network, while using less than 0.1 W. This speed and efficiency imply that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
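    The weight constraint is the key adaptation step. As a hedged sketch of the general idea only (the actual EEDN training algorithm is more involved, and the threshold here is arbitrary), mapping trained real-valued weights onto {-1, 0, 1} can be written as:

        import numpy as np

        def ternarize(weights, threshold=0.05):
            """Map real-valued weights to {-1, 0, 1}: weights near zero are pruned,
            the rest keep only their sign (illustrative, not the EEDN algorithm)."""
            t = np.zeros_like(weights, dtype=np.int8)
            t[weights > threshold] = 1
            t[weights < -threshold] = -1
            return t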

  18. Image acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Reardon, Frank J.; Salutz, James R.

    1991-07-01

    The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from the archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate with the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.

  19. Phase aided 3D imaging and modeling: dedicated systems and case studies

    NASA Astrophysics Data System (ADS)

    Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

    2014-05-01

    Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from single 3D sensor to a kind of optical measurement network composed of multiple node 3D-sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both single sensor and multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies including the generation of high quality color model of movable cultural heritage and photo booth from body scanning are presented to demonstrate our approach.
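
    For readers unfamiliar with phase-aided active stereo, the sketch below shows the standard N-step phase-shifting computation that such systems typically build on; the prototypes' actual pipeline (phase unwrapping, stereo matching, multi-sensor calibration and fusion) is not reproduced here.

        import numpy as np

        def wrapped_phase(images):
            """images: list of N fringe images taken with phase shifts 2*pi*n/N."""
            n_steps = len(images)
            shifts = 2 * np.pi * np.arange(n_steps) / n_steps
            num = sum(img * np.sin(s) for img, s in zip(images, shifts))
            den = sum(img * np.cos(s) for img, s in zip(images, shifts))
            return -np.arctan2(num, den)   # wrapped phase map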

  20. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12–10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  1. A New Approach to Create Image Control Networks in ISIS

    NASA Astrophysics Data System (ADS)

    Becker, K. J.; Berry, K. L.; Mapel, J. A.; Walldren, J. C.

    2017-06-01

    A new approach was used to create a feature-based control point network that required the development of new tools in the Integrated Software for Imagers and Spectrometers (ISIS3) system to process very large datasets.

  2. Modeling And Simulation Of Multimedia Communication Networks

    NASA Astrophysics Data System (ADS)

    Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.

    1989-05-01

    In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
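
    To make the elapsed-time comparison concrete, here is a back-of-the-envelope estimate of the kind the simulations quantify; the image size, pyramid browsing ratio, and DQDB payload rate below are assumptions for illustration, not values taken from the paper.

        image_bits = 2048 * 2048 * 12          # assumed uncompressed radiological image
        pyramid_ratio = 16                     # assumed coarse browsing level of the pyramid
        bit_rate = 34e6                        # assumed usable DQDB MAN payload rate, bit/s

        full_time = image_bits / bit_rate
        browse_time = (image_bits / pyramid_ratio) / bit_rate
        print(f"full image: {full_time:.2f} s, coarse browse level: {browse_time:.3f} s")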

  3. Architecture of the parallel hierarchical network for fast image recognition

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule

    2016-09-01

    Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures of the temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates a structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.

  4. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    NASA Astrophysics Data System (ADS)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology, the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking has been proven and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms, to small, low cost, distributed, sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before these are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g. Zigbee) are required. To this end, Avaak has designed and implemented an ultra-low power networking protocol designed to carry large volumes of data through the network. The low power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to the communications, identification is very desirable; hence location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation, via wide band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak, as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications - some of which are undergoing initial field tests.

  5. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    PubMed Central

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-01-01

    Faults in vascular (VN) and neuronal networks of spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, knowledge of the complete fine structure of the VN and neuronal system is lacking, which represents a crucial problem. Conventional 2D imaging yields incomplete spatial coverage leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of three-dimensional VN and neuronal systems of ex-vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with no contrast agent, no sectioning, and no destructive sample preparation. We image both the 3D distribution of micro-capillary network and the micrometric nerve fibers, axon-bundles and neuron soma. Our approach is very suitable for pre-clinical investigation of neurodegenerative pathologies and spinal-cord-injuries, in particular to resolve the entangled relationship between VN and neuronal system. PMID:25686728

  6. Disaster-hardened imaging POD for PACS

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice; Frost, Meryll

    2005-04-01

    After the events of 9/11, many people questioned their ability to keep critical services operational in the face of massive infrastructure failure. Hospitals increased their backup and recovery power, made plans for emergency water and food, and operated on a heightened alert awareness with more frequent disaster drills. In a film-based radiology department, if a portable X-ray unit, a CT unit, an Ultrasound unit, and a film processor could be operated on emergency power, a limited but effective number of studies could be performed. However, in a digital department, there is a reliance on the network infrastructure to deliver images to viewing locations. The system developed for our institution uses several imaging PODs, a name we chose because it implied to us a safe, contained environment. Each POD is a stand-alone emergency-powered network capable of generating images and displaying them in the POD or printing them to a DICOM printer. The technology we used to create a POD consists of a computer with dual network interface cards joining our private, local POD network to the hospital network. In the case of an infrastructure failure, each POD can and does work independently to produce CTs, CRs, and Ultrasounds. The system has been tested during disaster drills and works correctly, producing images using equipment technologists are comfortable with and requiring very few emergency switch-over tasks. Purpose: To provide imaging capabilities in the event of a natural or man-made disaster with infrastructure failure. Method: Each POD is on both the standard and the emergency power systems. All the vendor equipment that produces images is on a private, stand-alone network controlled either by a simple or a managed switch. Included in each POD is a dry-process DICOM printer that is rarely used during normal operations and a display workstation. One node on the private network is a PACS application processor (AP) with two network interface cards, one for the private network and one for the standard PACS network. During ordinary daily operations, all acquired images pass through this AP and are routed to the PACS archives, web servers, and workstations. However, if the power and network to much of the hospital were to fail, the stand-alone POD could still function. Images are routed to the AP but cannot be forwarded to the main network; they can, however, be routed to the printer and display in the POD, and they are also stored on the AP so that normal routing can continue when the infrastructure is restored. Results: The imaging PODs have been tested in actual disaster exercises where the infrastructure was intentionally removed and worked as designed. To date, we have not had to use them in a real-life scenario and we hope we never do, but we feel we have a reasonable level of emergency imaging capability if we ever need it. Conclusions: Our testing indicates our PODs are a viable way to continue medical imaging in the face of an emergency with a major part of our network and electrical infrastructure destroyed.
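
    The store-and-forward role of the application processor can be pictured with the simplified sketch below; host names, ports, and the send callbacks are placeholders, not the institution's actual configuration.

        import queue, socket

        pending = queue.Queue()

        def network_up(host="pacs-archive.hospital.local", port=104, timeout=2) -> bool:
            """Probe reachability of the main PACS network (placeholder host/port)."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        def handle_image(image_bytes: bytes, send_local, send_to_pacs):
            send_local(image_bytes)            # POD printer / display always available
            pending.put(image_bytes)           # keep a copy on the AP
            while network_up() and not pending.empty():
                send_to_pacs(pending.get())    # resume normal routing when restored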

  7. Design and clinical evaluation of a high-capacity digital image archival library and high-speed network for the replacement of cinefilm in the cardiac angiography environment

    NASA Astrophysics Data System (ADS)

    Cusma, Jack T.; Spero, Laurence A.; Groshong, Bennett R.; Cho, Teddy; Bashore, Thomas M.

    1993-09-01

    An economical and practical digital solution for the replacement of 35 mm cine film as the archive media in the cardiac x-ray imaging environment has remained lacking to date due to the demanding requirements of high capacity, high acquisition rate, high transfer rate, and a need for application in a distributed environment. A clinical digital image library and network based on the D2 digital video format has been installed in the Duke University Cardiac Catheterization Laboratory. The system architecture includes a central image library with digital video recorders and robotic tape retrieval, three acquisition stations, and remote review stations connected via a serial image network. The library has a capacity for over 20,000 Gigabytes of uncompressed image data, equivalent to records for approximately 20,000 patients. Image acquisition in the clinical laboratories is via a real-time digital interface between the digital angiography system and a local digital recorder. Images are transferred to the library over the serial network at a rate of 14.3 Mbytes/sec and permanently stored for later review. The image library and network are currently undergoing a clinical comparison with cine film for visual and quantitative assessment of coronary artery disease. At the conclusion of the evaluation, the configuration will be expanded to include four additional catheterization laboratories and remote review stations throughout the hospital.
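
    The quoted figures imply roughly 1 Gbyte of uncompressed image data per patient record; a rough transfer-time estimate at the stated serial network rate (ignoring protocol overhead) follows directly:

        capacity_gb = 20_000                   # library capacity, uncompressed (quoted)
        patients = 20_000                      # patient records (quoted)
        per_patient_gb = capacity_gb / patients            # ~1 GB per record
        transfer_rate_mb_s = 14.3                          # serial network rate (quoted)
        seconds_per_record = per_patient_gb * 1024 / transfer_rate_mb_s
        print(per_patient_gb, round(seconds_per_record))   # 1.0 GB, ~72 s per record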

  8. Noise reduction and image enhancement using a hardware implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    David, Robert; Williams, Erin; de Tremiolles, Ghislain; Tannhof, Pascal

    1999-03-01

    In this paper, we present a neural based solution developed for noise reduction and image enhancement using the ZISC, an IBM hardware processor which implements the Restricted Coulomb Energy algorithm and the K-Nearest Neighbor algorithm. Artificial neural networks present the advantages of processing time reduction in comparison with classical models, adaptability, and the weighted property of pattern learning. The goal of the developed application is image enhancement in order to restore old movies (noise reduction, focus correction, etc.), to improve digital television images, or to treat images which require adaptive processing (medical images, spatial images, special effects, etc.). Image results show a quantitative improvement over the noisy image as well as the efficiency of this system. Further enhancements are being examined to improve the output of the system.
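
    As a software illustration of the K-Nearest-Neighbor mode used by the ZISC (not the authors' implementation), a noisy pixel's neighborhood can be matched against prototype patches learned from clean images and replaced by the mean centre value of the closest prototypes:

        import numpy as np

        def knn_denoise_pixel(patch: np.ndarray, prototypes: np.ndarray, k: int = 3) -> float:
            """patch: (9,) flattened 3x3 neighbourhood; prototypes: (N, 9) clean patches."""
            d = np.linalg.norm(prototypes - patch, axis=1)
            nearest = np.argsort(d)[:k]
            return float(prototypes[nearest, 4].mean())    # index 4 = centre of the 3x3 patch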

  9. Enterprise-scale image distribution with a Web PACS.

    PubMed

    Gropper, A; Doyle, S; Dreyer, K

    1998-08-01

    The integration of images with existing and new health care information systems poses a number of challenges in a multi-facility network: image distribution to clinicians; making DICOM image headers consistent across information systems; and integration of teleradiology into PACS. A novel, Web-based enterprise PACS architecture introduced at Massachusetts General Hospital provides a solution. Four AMICAS Web/Intranet Image Servers were installed as the default DICOM destination of 10 digital modalities. A fifth AMICAS receives teleradiology studies via the Internet. Each AMICAS includes: a Java-based interface to the IDXrad radiology information system (RIS), a DICOM autorouter to tape-library archives and to the Agfa PACS, a wavelet image compressor/decompressor that preserves compatibility with DICOM workstations, a Web server to distribute images throughout the enterprise, and an extensible interface which permits links between other HIS and AMICAS. Using wavelet compression and Internet standards as its native formats, AMICAS creates a bridge to the DICOM networks of remote imaging centers via the Internet. This teleradiology capability is integrated into the DICOM network and the PACS thereby eliminating the need for special teleradiology workstations. AMICAS has been installed at MGH since March of 1997. During that time, it has been a reliable component of the evolving digital image distribution system. As a result, the recently renovated neurosurgical ICU will be filmless and use only AMICAS workstations for mission-critical patient care.
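
    A hedged sketch of wavelet compression of the kind used for image distribution is shown below, using the PyWavelets library purely for illustration; the product's actual codec, wavelet choice, and quality parameters are not described in this abstract.

        import numpy as np
        import pywt

        def compress(img, wavelet="db2", level=3, keep=0.05):
            """Keep only the largest wavelet coefficients (sparse, lossy representation)."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            thresh = np.quantile(np.abs(arr), 1 - keep)
            arr[np.abs(arr) < thresh] = 0
            return arr, slices, wavelet

        def decompress(arr, slices, wavelet):
            coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            return pywt.waverec2(coeffs, wavelet)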

  10. Neural network-based system for pattern recognition through a fiber optic bundle

    NASA Astrophysics Data System (ADS)

    Gamo-Aranda, Javier; Rodriguez-Horche, Paloma; Merchan-Palacios, Miguel; Rosales-Herrera, Pablo; Rodriguez, M.

    2001-04-01

    A neural network based system to identify images transmitted through a Coherent Fiber-optic Bundle (CFB) is presented. Patterns are generated in a computer, displayed on a Spatial Light Modulator, imaged onto the input face of the CFB, and recovered optically by a CCD sensor array for further processing. Input and output optical subsystems were designed and used to that end. The recognition step of the transmitted patterns is made by a powerful, widely-used, neural network simulator running on the control PC. A complete PC-based interface was developed to control the different tasks involved in the system. An optical analysis of the system capabilities was carried out prior to performing the recognition step. Several neural network topologies were tested, and the corresponding numerical results are also presented and discussed.

  11. A Complete Image Management and Communications Network for the Neuroradiology Service at Georgetown University Hospital

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Muraki, Alan; Mallon-Ingeholm, Mary L.; Mun, Seong K.; Clark, Letitia; Schellinger, Dieter

    1989-05-01

    A complete image management and communications system has been installed at Georgetown University Hospital (GUH). The network is based on the AT&T CommView System. In the Neuroradiology Division, this comprehensive network supports a multiscreen workstation with access to multiple imaging modalities such as CT and MRI from both the hospital and a remote imaging center. In addition, the radiologist can access these images from various workstations located throughout the hospital as well as from remote sites such as the home. Among the radiology services supported by the network, neuroradiology has the greatest need for such a system with extensive daily requirements involving the remote imaging center and on-line consultation around the clock. By providing neuroradiology with all available communication links, the radiologist can monitor, diagnose, and consult. The remote site has a subsystem capable of acquiring images and transmitting them over a high speed T1 data circuit. The GUH neuroradiologist can view these images on the neuro workstation or any of the workstations available in the Hospital. Fast and easy access to the images allows a radiologist to monitor multiple examinations as well as to utilize the workstation for diagnosis. To provide the neuroradiologist quick access to images at all times, a PC-based Results Viewing Station (RVS) has been placed in a doctor's home. Images may be sent to the RVS, or the user may request images from the central database at the hospital. Images can be viewed at home either as they are transmitted, or following transfer of a whole study. The efficiency and effectiveness of the system's capabilities with special regard to remote and teleradiology (RVS) operations have been studied for the neuroradiology service. This paper will discuss the current clinical acceptance and use, problems in implementation, and ways these difficulties are being surmounted.

  12. Retina Image Analysis and Ocular Telehealth: The Oak Ridge National Laboratory-Hamilton Eye Institute Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karnowski, Thomas Paul; Giancardo, Luca; Li, Yaquin

    2013-01-01

    Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is beginning to grow in importance. One application of retina image analysis is in telemedicine, where an automated system could enable the automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, using data from both the telemedicine network and other public databases.

  13. Informatics methods to enable sharing of quantitative imaging research data.

    PubMed

    Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L

    2012-11-01

    The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable data sharing and to promote reuse of quantitative imaging data in the community. We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Neural networks: Application to medical imaging

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  15. Health Insurance Portability and Accountability Act-Compliant Ocular Telehealth Network for the Remote Diagnosis and Management of Diabetic Retinopathy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yaquin; Karnowski, Thomas Paul; Tobin Jr, Kenneth William

    2011-01-01

    In this article, we present the design and implementation of a regional ocular telehealth network for remote assessment and management of diabetic retinopathy (DR), including the design requirements, network topology, protocol design, system work flow, graphics user interfaces, and performance evaluation. The Telemedical Retinal Image Analysis and Diagnosis Network is a computer-aided, image analysis telehealth paradigm for the diagnosis of DR and other retinal diseases using fundus images acquired from primary care end users delivering care to underserved patient populations in the mid-South and southeastern United States.

  16. A health insurance portability and accountability act-compliant ocular telehealth network for the remote diagnosis and management of diabetic retinopathy.

    PubMed

    Li, Yaqin; Karnowski, Thomas P; Tobin, Kenneth W; Giancardo, Luca; Morris, Scott; Sparrow, Sylvia E; Garg, Seema; Fox, Karen; Chaum, Edward

    2011-10-01

    In this article, we present the design and implementation of a regional ocular telehealth network for remote assessment and management of diabetic retinopathy (DR), including the design requirements, network topology, protocol design, system work flow, graphics user interfaces, and performance evaluation. The Telemedical Retinal Image Analysis and Diagnosis Network is a computer-aided, image analysis telehealth paradigm for the diagnosis of DR and other retinal diseases using fundus images acquired from primary care end users delivering care to underserved patient populations in the mid-South and southeastern United States.

  17. Intellectual system for images restoration

    NASA Astrophysics Data System (ADS)

    Mardare, Igor

    2005-02-01

    Intelligent systems based on artificial neural networks and associative memory make it possible to solve problems of image recognition and restoration effectively. However, within analytical technologies there is no dominant approach to solving such intellectual problems. The choice of the best technology depends on the nature of the problem, the features of the objects, the volume of available information about the object, the number of object classes, etc. It is therefore necessary to determine the opportunities, preconditions, and fields of application of neural networks and associative memory for solving the image restoration problem, and to use their complementary benefits for the further development of intelligent systems.

  18. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
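
    The core of vector quantization can be sketched in a few lines: image blocks are mapped to the nearest codeword of a small codebook so that only indices need to be stored or transmitted. A generic codebook stands in here for the self-organization network; the neural and VLSI aspects are not modelled.

        import numpy as np

        def encode_blocks(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
            """blocks: (M, 16) flattened 4x4 blocks; codebook: (K, 16) codewords."""
            d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1)                     # one codebook index per block

        def decode_blocks(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
            return codebook[indices]                    # lossy reconstruction of the blocks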

  19. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image-guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in the research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as robot operating system (ROS) in robotics and 3D Slicer in medical image computing could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D slicer, and Lego Mindstorms with ROS as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by the medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
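
    A heavily simplified sketch of the bridging idea is shown below: a ROS 1 node subscribes to a topic and forwards each message over a TCP socket to external software. The topic name, port, and naive length-prefixed framing are assumptions for illustration; the actual ROS-IGTL-Bridge speaks the OpenIGTLink wire protocol and also carries transforms, points, and images.

        #!/usr/bin/env python
        # Runnable only inside a ROS 1 environment with rospy installed.
        import socket
        import rospy
        from std_msgs.msg import String

        def main():
            rospy.init_node("igtl_bridge_sketch")
            sock = socket.create_connection(("localhost", 18944))    # assumed port

            def forward(msg):
                payload = msg.data.encode("utf-8")
                # naive length-prefixed framing, not real OpenIGTLink headers
                sock.sendall(len(payload).to_bytes(4, "big") + payload)

            rospy.Subscriber("/IGTL_STRING_OUT", String, forward)     # assumed topic name
            rospy.spin()

        if __name__ == "__main__":
            main()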

  20. Advanced Medical Technology and Network Systems Research.

    DTIC Science & Technology

    1999-09-01

    This report establishes technology requirements for image-guided therapies and related imaging systems and surgical procedures. Advanced technologies included in the report are impedance imaging and a palpation training system. Related project efforts include a virtual clinic for patients with chronic illness and telemedicine for hemodialysis.

  1. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    Recurrent and feedforward artificial neural networks are investigated for adaptive optics imaging systems; the feedforward network studied is a single-layer perceptron. The network input features are the wavefront sensor slope outputs and neighboring actuator feedback commands.

  2. Integration Of An MR Image Network Into A Clinical PACS

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Mankovich, Nicholas J.; Taira, Ricky K.; Cho, Paul S.; Huang, H. K.

    1988-06-01

    A direct link between a clinical pediatric PACS module and a FONAR MRI image network was implemented. The original MR network combines the MR scanner, a remote viewing station and a central archiving station. The pediatric PACS directly connects to the archiving unit through an Ethernet TCP-IP network adhering to FONAR's protocol. The PACS communication software developed supports the transfer of patient studies and the patient information directly from the MR archive database to the pediatric PACS. In the first phase of our project we developed a package to transfer data between a VAX-11/750 and the IBM PC/AT-based MR archive database through the Ethernet network. This system served as a model for PACS-to-modality network communication. Once testing was complete on this research network, the software and network hardware were moved to the clinical pediatric VAX for full PACS integration. In parallel to the direct transmission of digital images to the Pediatric PACS, a broadband communication system in video format was developed for real-time broadcasting of images originating from the MR console to 8 remote viewing stations distributed in the radiology department. These analog viewing stations allow the radiologists to directly monitor patient positioning and to select the scan levels during a patient examination from remote locations in the radiology department. This paper reports (1) the technical details of this implementation, (2) the merits of this network development scheme, and (3) the performance statistics of the network-to-PACS interface.

  3. Cancer diagnostics using neural network sorting of processed images

    NASA Astrophysics Data System (ADS)

    Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.

    1996-03-01

    A combination of image processing with neural network sorting was conducted to demonstrate the feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points to bound the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data was taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; in total, twenty data points per assessed cell were generated. These data point sets, designated as neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
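
    The reported blind-test counts translate into the following operating characteristics:

        true_pos, total_pos = 31, 31           # all 31 positives identified
        true_neg, total_neg = 41, 46           # 41 of 46 negatives correct, 5 false positives
        sensitivity = true_pos / total_pos     # 1.00
        specificity = true_neg / total_neg     # ~0.89
        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")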

  4. Performance of asynchronous transfer mode (ATM) local area and wide area networks for medical imaging transmission in clinical environment.

    PubMed

    Huang, H K; Wong, A W; Zhu, X

    1997-01-01

    Asynchronous transfer mode (ATM) technology emerges as a leading candidate for medical image transmission in both local area network (LAN) and wide area network (WAN) applications. This paper describes the performance of an ATM LAN and WAN network at the University of California, San Francisco. The measurements were obtained using an intensive care unit (ICU) server connecting to four image workstations (WS) at four different locations of a hospital-integrated picture archiving and communication system (HI-PACS) in a daily regular clinical environment. Four types of performance were evaluated: magnetic disk-to-disk, disk-to-redundant array of inexpensive disks (RAID), RAID-to-memory, and memory-to-memory. Results demonstrate that the transmission rate between two workstations can reach 5-6 Mbytes/s from RAID-to-memory, and 8-10 Mbytes/s from memory-to-memory. When the server has to send images to all four workstations simultaneously, the transmission rate to each WS is about 4 Mbytes/s. Both situations are adequate for radiologic image communications for picture archiving and communication systems (PACS) and teleradiology applications.

  5. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path and the management path, which solves the metadata bottleneck problem of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices and security of data access. We use object-based storage for the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In the storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of the traditional network storage model and the object-based storage model.
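
    A toy sketch of the object abstraction is given below: each remote sensing image is stored as an object (payload plus metadata) addressed by an identifier, so clients on any platform can retrieve or query it without knowing block layout. Class and field names are illustrative only, not the paper's design.

        from dataclasses import dataclass, field

        @dataclass
        class RemoteSensingObject:
            object_id: str
            data: bytes                                   # image payload
            metadata: dict = field(default_factory=dict)  # e.g. sensor, bounds, resolution

        class ObjectStore:
            def __init__(self):
                self._objects = {}
            def put(self, obj: RemoteSensingObject):
                self._objects[obj.object_id] = obj        # data path
            def get(self, object_id: str) -> RemoteSensingObject:
                return self._objects[object_id]
            def query(self, **criteria):                  # metadata / management path
                return [o for o in self._objects.values()
                        if all(o.metadata.get(k) == v for k, v in criteria.items())]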

  6. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A; Podowski, Raf M

    2015-05-05

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.

  7. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian [Kensington, CA; Myers, Cornelia A [St. Louis, MO; Podowski, Raf M [Pleasant Hill, CA

    2011-07-26

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.
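
    One plausible reading of the idea (an assumption, not the patented vector format) is a weighted term co-occurrence network built from the data items and queried for terms related to a given term:

        from collections import Counter
        from itertools import combinations

        def build_network(items):
            """items: list of term lists, one list per data item."""
            edges = Counter()
            for terms in items:
                for a, b in combinations(sorted(set(terms)), 2):
                    edges[(a, b)] += 1                     # co-occurrence weight
            return edges

        def related_to(network, term):
            hits = [(pair, w) for pair, w in network.items() if term in pair]
            return sorted(hits, key=lambda x: -x[1])

        net = build_network([["tumor", "mri", "contrast"], ["mri", "contrast", "gadolinium"]])
        print(related_to(net, "mri"))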

  8. Teleradiology network system and computer-aided diagnosis workstation using the web medical image conference system with a new information security solution

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kaneko, Masahiro; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2011-03-01

    We have developed a teleradiology network system with a new information security solution and an accompanying web medical image conference system. In a teleradiology network system, the security of the information network is a very important subject. We are studying a secret sharing scheme as a method to safely store and transmit the confidential medical information used in the teleradiology network system, which is otherwise exposed to the risk of damage and interception. A secret sharing scheme divides the confidential medical information into two or more tallies; individual medical information cannot be decoded from a single tally at all. Our method also has a RAID-like function: if one tally fails, redundant data has already been copied to another tally. Confidential information is preserved at separate Data Centers connected through the internet, because individual medical information cannot be decoded from one tally alone. Therefore, even if one of the Data Centers is struck and its information is damaged, the confidential medical information can be decoded using the tallies preserved at the Data Centers that escape damage. We can safely share the screen of a workstation displaying the medical image from a Data Center with two or more web conference terminals at the same time. Moreover, a real-time biometric face authentication system is connected to the Data Center; it analyzes features of a face image captured by the camera within 20 seconds and protects the safety of the medical information. We propose a new information transmission method and a new information storage method based on this new information security solution.
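
    A textbook (2-of-3) threshold scheme in the spirit of the "tallies" described above is sketched here: any single share reveals nothing, any two shares reconstruct the data, and the loss of one data center is tolerated, which is the RAID-like redundancy. This is standard Shamir sharing over GF(257), not the authors' deployed solution.

        import secrets

        P = 257  # prime field large enough for one byte per share element

        def split(data: bytes, xs=(1, 2, 3)):
            """Return one share ("tally") per x value; each alone reveals nothing."""
            shares = {x: [] for x in xs}
            for m in data:
                a = secrets.randbelow(P)                 # random line through (0, m)
                for x in xs:
                    shares[x].append((m + a * x) % P)
            return shares

        def reconstruct(x1, y1, x2, y2):
            """Recover the byte stream from any two shares by interpolation at x = 0."""
            out = bytearray()
            l1 = (-x2 * pow((x1 - x2) % P, -1, P)) % P
            l2 = (-x1 * pow((x2 - x1) % P, -1, P)) % P
            for v1, v2 in zip(y1, y2):
                out.append((v1 * l1 + v2 * l2) % P)
            return bytes(out)

        shares = split(b"confidential CT slice")
        assert reconstruct(1, shares[1], 3, shares[3]) == b"confidential CT slice"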

  9. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  10. CD-ROM Networking: Navigating through VINES and NetWare and the New Software Technologies.

    ERIC Educational Resources Information Center

    Lieberman, Paula

    1995-01-01

    Provides an overview of developments in CD-ROM networking technology and describes products offered by Axis, Banyan (VINES--network operating environment), CD Connection, Celerity, Data/Ware, Document Imaging Systems Corporation (DISC), Imagery, Jodian, Meridian, Micro Design International, Microsoft, Microtest, Novell, OnLine Computer Systems,…

  11. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper describes a medical system that can get medical images from the picture archiving and communication system on the mobile device over the wireless network. In the proposed application, the mobile device got patient information and medical images through a proxy server connecting to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of the medical images over the wireless network of the proposed application were also discussed. The results demonstrated that this proposed medical application could provide a smooth interactive experience in the WLAN and 3G networks.
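
    Two of the named server-side rendering operations are easy to sketch: from a CT/MR volume the proxy can compute a maximum intensity projection or extract a multi-planar reconstruction slice and send only the resulting 2-D image to the mobile client (direct volume rendering is omitted here).

        import numpy as np

        def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
            return volume.max(axis=axis)          # maximum intensity projection

        def mpr_axial(volume: np.ndarray, z: int) -> np.ndarray:
            return volume[z, :, :]                # one reconstructed axial plane

        vol = np.random.randint(0, 4096, size=(64, 256, 256), dtype=np.uint16)
        thumbnail = mip(vol)                      # 256x256 image sent to the device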

  12. Computer interpretation of thallium SPECT studies based on neural network analysis

    NASA Astrophysics Data System (ADS)

    Wang, David C.; Karvelis, K. C.

    1991-06-01

    A class of artificial intelligence (AI) programs known as neural networks are well suited to pattern recognition. A neural network is trained rather than programmed to recognize patterns. This differs from "expert system" AI programs in that it is not following an extensive set of rules determined by the programmer, but rather bases its decision on a gestalt interpretation of the image. The "bullseye" images from cardiac stress thallium tests performed on 50 male patients, as well as several simulated images, were used to train the network. The network was able to accurately classify all patients in the training set. The network was then tested against 50 unknown patients and was able to correctly categorize 77% of the areas of ischemia and 92% of the areas of infarction. While not yet matching the ability of a trained physician, the neural network shows great promise in this area and has potential application in other areas of medical imaging.

  13. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  14. Neural networks for sign language translation

    NASA Astrophysics Data System (ADS)

    Wilson, Beth J.; Anspach, Gretel

    1993-09-01

    A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here is used to classify video images of handshapes into their linguistic counterpart in American Sign Language. The video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors are then used as inputs to a neural network that classifies their shapes. The network is trained with various examples from different signers and is tested with new images from new signers. The results have shown that for coarse handshape classes, the network is invariant to the type of camera used to film the various signers and to the segmentation technique.
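
    A hedged sketch of Fourier descriptors for a hand silhouette is shown below: the contour is treated as a complex sequence, transformed with the FFT, and normalized so that the low-order coefficients describe coarse hand shape independent of position and scale; the paper's exact preprocessing may differ.

        import numpy as np

        def fourier_descriptors(contour_xy: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
            """contour_xy: (N, 2) ordered boundary points of the hand silhouette."""
            z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
            spectrum = np.fft.fft(z)
            spectrum[0] = 0                                # drop DC -> translation invariance
            mags = np.abs(spectrum)
            mags /= mags[1] if mags[1] != 0 else 1.0       # scale invariance
            return mags[1:1 + n_coeffs]                    # coarse shape features for the net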

  15. Intercampus network of the Department of Radiology, School of Medicine, Indiana University

    NASA Astrophysics Data System (ADS)

    Witt, Robert M.; Gibbs, Thomas; Holden, Robert W.

    1994-05-01

    During the past year, the Department of Radiology, School of Medicine, Indiana University designed, specified, and installed a campus wide network. The network supports three functions: a laser camera network to allow the transfer of hard copy images across the campus; a positron emission tomography (PET) network to allow the interconnection of the workstations comprising the PET system; and a future personal computer network to allow support of departmental administrative functions with an upgrade path to allow the display of soft copy images in physician offices and other locations in the department.

  16. Pigment network-based skin cancer detection.

    PubMed

    Alfed, Naser; Khelifi, Fouad; Bouridane, Ahmed; Seker, Huseyin

    2015-08-01

    Diagnosing skin cancer in its early stages is a challenging task for dermatologists given the fact that the chance for a patient's survival is higher and hence the process of analyzing skin images and making decisions should be time efficient. Therefore, diagnosing the disease using automated and computerized systems has nowadays become essential. This paper proposes an efficient system for skin cancer detection on dermoscopic images. It has been shown that the statistical characteristics of the pigment network, extracted from the dermoscopic image, could be used as efficient discriminating features for cancer detection. The proposed system has been assessed on a dataset of 200 dermoscopic images of the `Hospital Pedro Hispano' [1] and the results of cross-validation have shown high detection accuracy.

  17. A Health Insurance Portability and Accountability Act–Compliant Ocular Telehealth Network for the Remote Diagnosis and Management of Diabetic Retinopathy

    PubMed Central

    Li, Yaqin; Karnowski, Thomas P.; Tobin, Kenneth W.; Giancardo, Luca; Morris, Scott; Sparrow, Sylvia E.; Garg, Seema; Fox, Karen

    2011-01-01

    In this article, we present the design and implementation of a regional ocular telehealth network for remote assessment and management of diabetic retinopathy (DR), including the design requirements, network topology, protocol design, system work flow, graphics user interfaces, and performance evaluation. The Telemedical Retinal Image Analysis and Diagnosis Network is a computer-aided, image analysis telehealth paradigm for the diagnosis of DR and other retinal diseases using fundus images acquired from primary care end users delivering care to underserved patient populations in the mid-South and southeastern United States. PMID:21819244

  18. Wireless Command-and-Control of UAV-Based Imaging LANs

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley; Dunagan, S. E.; Sullivan, D. V.; Slye, R. E.; Leung, J. G.; Johnson, L. F.

    2006-01-01

    Dual airborne imaging system networks were operated using a wireless line-of-sight telemetry system developed as part of a 2002 unmanned aerial vehicle (UAV) imaging mission over the USA's largest coffee plantation on the Hawaiian island of Kauai. A primary mission objective was the evaluation of commercial-off-the-shelf (COTS) 802.11b wireless technology for reduction of payload telemetry costs associated with UAV remote sensing missions. Predeployment tests with a conventional aircraft demonstrated successful wireless broadband connectivity between a rapidly moving airborne imaging local area network (LAN) and a fixed ground station LAN. Subsequently, two separate LANs with imaging payloads, packaged in exterior-mounted pressure pods attached to the underwing of NASA's Pathfinder-Plus UAV, were operated wirelessly by ground-based LANs over independent Ethernet bridges. Digital images were downlinked from the solar-powered aircraft at data rates of 2-6 megabits per second (Mbps) over a range of 6.5-9.5 km. An integrated wide area network enabled payload monitoring and control through the Internet from a range of ca. 4000 km during parts of the mission. The recent advent of 802.11g technology is expected to boost the system data rate by about a factor of five.

  19. Deep learning application: rubbish classification with aid of an android device

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Zhan, Jie

    2017-06-01

    Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Addressing the practical problem that people often do not know which category a piece of rubbish belongs to, and building on the powerful image classification ability of deep learning, we have designed a prototype system to help users classify rubbish. First, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed that lets users capture images of unclassified rubbish, upload them to the web server for backstage analysis, and retrieve the feedback, so that users can conveniently obtain classification guidance on an Android device. Tests on our prototype show that an image of a single type of rubbish in its original shape can be classified reliably, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to help users decide its classification. Nevertheless, the system shows promise as an auxiliary aid for rubbish classification if the network training strategy is optimized further.
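
    The client side of the capture-upload-classify loop can be pictured as a simple HTTP upload; the endpoint URL and the JSON response field below are placeholders standing in for the prototype's actual web service.

        import requests

        def classify_rubbish(image_path: str) -> str:
            """Upload a captured image and return the category suggested by the server."""
            with open(image_path, "rb") as f:
                resp = requests.post("http://example-server/classify",
                                     files={"image": f}, timeout=10)
            resp.raise_for_status()
            return resp.json()["category"]      # e.g. "recyclable", "organic", ...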

  20. PDA-phone-based instant transmission of radiological images over a CDMA network by combining the PACS screen with a Bluetooth-interfaced local wireless link.

    PubMed

    Kim, Dong Keun; Yoo, Sun K; Park, Jeong Jin; Kim, Sun Ho

    2007-06-01

    Remote teleconsultation by specialists is important for timely, correct, and specialized emergency surgical and medical decision making. In this paper, we designed a new personal digital assistant (PDA)-phone-based emergency teleradiology system by combining cellular communication with Bluetooth-interfaced local wireless links. The mobility and portability resulting from the use of PDAs and wireless communication can provide a more effective means of emergency teleconsultation without requiring the user to be limited to a fixed location. Moreover, it enables synchronized radiological image sharing between the attending physician in the emergency room and the remote specialist on picture archiving and communication system terminals without distorted image acquisition. To enable rapid and fine-quality radiological image transmission over a cellular network in a secure manner, progressive compression and security mechanisms have been incorporated. The proposed system is tested over a code division Multiple Access 1x-Evolution Data-Only network to evaluate the performance and to demonstrate the feasibility of this system in a real-world setting.

  1. [A study of the transport of three dimensional medical images to remote institutions for telediagnosis].

    PubMed

    Hayashi, Takashi; Iwai, Mitsuhiro; Takahashi, Katsuhiko; Takeda, Satoshi; Tateishi, Toshiki; Kaneko, Rumi; Ogasawara, Yoko; Yonezawa, Kazuya; Hanada, Akiko

    2011-01-01

    Using a 3D-image-creation server and IP-VPN network services, we began delivering 3D images to a remote institution. A display trial of the primary images, a rotation trial of a 3D image, and a reproducibility trial were conducted to examine the practicality of the system on a real network between Hakodate and Sapporo (a communication distance of about 150 km). In these trials, basic data (transfer time and received data volume) were measured for every variation of QF (quality factor) and monitor resolution. Analyzing the results obtained with our hospital's 3D image delivery server under varying QF settings and monitor resolutions, we concluded that the system is practical for remote radiogram interpretation work, even when the regional access point has a line speed of 6 Mbps.

  2. A Framework for Integration of Heterogeneous Medical Imaging Networks

    PubMed Central

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is of increasing importance in medical diagnosis and treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve interoperability problems between PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross-Enterprise Document Sharing for Imaging (XDS-I). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS. PMID:25279021

  3. A framework for integration of heterogeneous medical imaging networks.

    PubMed

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is of increasing importance in medical diagnosis and treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve interoperability problems between PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross-Enterprise Document Sharing for Imaging (XDS-I). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS.
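
    To make the plugin idea above concrete, here is a minimal sketch of what such an extensible protocol-plugin interface with a federated query service could look like; the class and method names are illustrative assumptions, not the Dicoogle API.

```python
# Minimal sketch of an extensible protocol-plugin interface for a medical
# imaging integration framework. Names are illustrative, not the Dicoogle API.
from abc import ABC, abstractmethod
from typing import Dict, Iterable, List


class ProtocolPlugin(ABC):
    """One plugin per supported protocol (e.g. DICOM, XDS-I, or a private one)."""

    name: str

    @abstractmethod
    def query(self, criteria: Dict[str, str]) -> Iterable[Dict[str, str]]:
        """Return metadata records matching the criteria."""

    @abstractmethod
    def retrieve(self, object_id: str) -> bytes:
        """Return the raw object (e.g. a DICOM file) by identifier."""


class FederationService:
    """Fans a query out to every registered plugin and merges the results."""

    def __init__(self) -> None:
        self._plugins: List[ProtocolPlugin] = []

    def register(self, plugin: ProtocolPlugin) -> None:
        self._plugins.append(plugin)

    def federated_query(self, criteria: Dict[str, str]) -> List[Dict[str, str]]:
        results: List[Dict[str, str]] = []
        for plugin in self._plugins:
            for record in plugin.query(criteria):
                record["source"] = plugin.name  # tag provenance per network
                results.append(record)
        return results
```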

  4. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    PubMed

    Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W

    2016-11-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.
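
    The pixel-wise classification idea behind such segmentation networks can be sketched with a tiny fully convolutional network; this is a simplified illustration of the general technique, not the authors' architecture (the layer sizes and the three output classes are assumptions).

```python
# Minimal sketch of a fully convolutional network for pixel-wise cell
# segmentation (background / boundary / interior). This is a generic
# illustration of the technique, not the architecture from the paper.
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution gives a class score for every pixel.
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    net = TinySegNet()
    phase_image = torch.randn(1, 1, 256, 256)   # one grayscale frame
    scores = net(phase_image)                    # (1, 3, 256, 256)
    segmentation = scores.argmax(dim=1)          # per-pixel class labels
    print(segmentation.shape)
```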

  5. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments

    DOE PAGES

    Van Valen, David A.; Kudo, Takamasa; Lane, Keara M.; ...

    2016-11-04

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.

  6. Edge detection for optical synthetic aperture based on deep neural network

    NASA Astrophysics Data System (ADS)

    Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2017-09-01

    Synthetic aperture optics systems can meet the demands for next-generation space telescopes to be lighter, larger and foldable. However, the boundaries of segmented aperture systems are much more complex than those of a whole aperture. More edge regions mean more imaging edge pixels, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a deep neural network algorithm for edge detection in optical synthetic aperture imaging. According to the detection needs, we constructed image sets from experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we ran the neural network, trained it on the training image set and tested its performance on a validation set; training was stopped when the test error on the validation set stopped declining. Given an input image, the neighborhood around each pixel is fed into the network, and the image is scanned pixel by pixel through the trained hidden layers; the network output judges whether the center of the input block lies on an edge of the fringes. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms and their improvements, our method makes its decision over a much larger neighborhood and is therefore more global and comprehensive. Experiments on more than 2,000 images are also presented to show that our method outperforms classical algorithms in edge detection on optical images.
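
    The scan-and-classify step can be sketched by sliding a window over the image and asking a trained classifier whether the central pixel is an edge; the patch size, the scikit-learn MLP, and the synthetic training data below are assumptions for illustration, not the network from the paper.

```python
# Sketch of patch-wise edge classification: slide a window across the image
# and let a trained classifier decide whether the central pixel is an edge.
# The 15x15 patch size, the MLP, and the random training data are
# illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPClassifier

PATCH = 15
HALF = PATCH // 2

# Stand-in training data: random patches with random edge/no-edge labels.
rng = np.random.default_rng(0)
train_patches = rng.random((500, PATCH * PATCH))
train_labels = rng.integers(0, 2, size=500)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
clf.fit(train_patches, train_labels)


def edge_map(image: np.ndarray) -> np.ndarray:
    """Classify every interior pixel from its surrounding patch."""
    out = np.zeros_like(image, dtype=np.uint8)
    for r in range(HALF, image.shape[0] - HALF):
        rows = []
        for c in range(HALF, image.shape[1] - HALF):
            rows.append(image[r - HALF:r + HALF + 1,
                              c - HALF:c + HALF + 1].ravel())
        out[r, HALF:image.shape[1] - HALF] = clf.predict(np.asarray(rows))
    return out


print(edge_map(rng.random((64, 64))).sum())
```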

  7. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Valen, David A.; Kudo, Takamasa; Lane, Keara M.

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.

  8. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments

    PubMed Central

    Van Valen, David A.; Lane, Keara M.; Quach, Nicolas T.; Maayan, Inbal

    2016-01-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems. PMID:27814364

  9. Initial Experience With A Prototype Storage System At The University Of North Carolina

    NASA Astrophysics Data System (ADS)

    Creasy, J. L.; Loendorf, D. D.; Hemminger, B. M.

    1986-06-01

    A prototype archiving system manufactured by the 3M Corporation has been in place at the University of North Carolina for approximately 12 months. The system was installed as a result of a collaboration between 3M and UNC, with 3M seeking testing of their system, and UNC realizing the need for an archiving system as an essential part of their PACS test-bed facilities. System hardware includes appropriate network and disk interface devices as well as media for both short- and long-term storage of images and their associated information. The system software includes those procedures necessary to communicate with the network interface elements (NIEs) as well as those procedures necessary to interpret the ACR-NEMA header blocks and to store the images. A subset of the total ACR-NEMA header is parsed and stored in a relational database system. The entire header is stored on disk with the completed study. Interactive programs have been developed that allow radiologists to easily retrieve information about the archived images and to send the full images to a viewing console. Initial experience with the system has consisted primarily of hardware and software debugging. Although the system is ACR-NEMA compatible, further objective and subjective assessments of system performance await the connection of compatible consoles and acquisition devices to the network.

  10. A Markov chain model for image ranking system in social networks

    NASA Astrophysics Data System (ADS)

    Zin, Thi Thi; Tin, Pyke; Toriu, Takashi; Hama, Hiromitsu

    2014-03-01

    In today's world, many kinds of networks exist: social, technological, business, and so on. All of these networks are similar in their distributions and are continuously growing and expanding on a large scale. Among them, social networks such as Facebook, Twitter, Flickr and many others provide a powerful abstraction of the structure and dynamics of diverse kinds of interpersonal connection and interaction. Generally, social network content is created and consumed under the influence of all the different social navigation paths that lead to it. Therefore, identifying important and user-relevant refined structures, such as visual information or communities, has become a major factor in modern decision making. Moreover, traditional information ranking systems cannot succeed because they fail to take into account the properties of navigation paths driven by social connections. In this paper, we propose a novel image ranking system for social networks that uses the social data relational graphs from a social media platform jointly with visual data to improve the relevance between returned images and user intentions (i.e., social relevance). Specifically, we propose a Markov chain based Social-Visual Ranking algorithm that takes social relevance into account. Through extensive experiments, we demonstrate the significance and effectiveness of the proposed social-visual ranking method.
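
    A Markov chain ranking of this kind can be sketched as a random walk over a graph whose transition weights blend social links and visual similarity; the mixing weight and the toy matrices below are assumptions for illustration, not the algorithm's actual parameters.

```python
# Sketch of a Markov chain image ranking: stationary distribution of a random
# walk whose transitions blend social-graph links with visual similarity.
# The 0.6/0.4 mixing weight and the toy matrices are illustrative assumptions.
import numpy as np


def markov_rank(social: np.ndarray, visual: np.ndarray,
                alpha: float = 0.6, iters: int = 100) -> np.ndarray:
    combined = alpha * social + (1.0 - alpha) * visual
    # Row-normalize to obtain a stochastic transition matrix.
    transition = combined / combined.sum(axis=1, keepdims=True)
    rank = np.full(transition.shape[0], 1.0 / transition.shape[0])
    for _ in range(iters):                      # power iteration
        rank = rank @ transition
    return rank / rank.sum()


# Four images: social co-interaction counts and pairwise visual similarity.
social = np.array([[1, 3, 0, 1],
                   [3, 1, 1, 0],
                   [0, 1, 1, 2],
                   [1, 0, 2, 1]], dtype=float)
visual = np.array([[1.0, 0.8, 0.1, 0.3],
                   [0.8, 1.0, 0.2, 0.1],
                   [0.1, 0.2, 1.0, 0.7],
                   [0.3, 0.1, 0.7, 1.0]])
print(markov_rank(social, visual))   # higher value = more relevant image
```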

  11. Observations of thunderstorm-related 630 nm airglow depletions

    NASA Astrophysics Data System (ADS)

    Kendall, E. A.; Bhatt, A.

    2015-12-01

    The Midlatitude All-sky imaging Network for Geophysical Observations (MANGO) is an NSF-funded network of 630 nm all-sky imagers in the continental United States. MANGO will be used to observe the generation, propagation, and dissipation of medium- and large-scale wave activity in the subauroral, mid- and low-latitude thermosphere. This network is actively being deployed and will ultimately consist of nine all-sky imagers. These imagers form a network providing continuous coverage over the western United States, including California, Oregon, Washington, Utah, Arizona and Texas, extending south into Mexico. This network sees high levels of both medium- and large-scale wave activity. Apart from the widely reported northeast-to-southwest propagating wave fronts resulting from the so-called Perkins mechanism, this network observes wave fronts propagating to the west, north and northeast. At least three of these anomalous events have been associated with thunderstorm activity. Imager data have been correlated with both GPS data and data from the AIRS (Atmospheric Infrared Sounder) instrument on board NASA's Earth Observing System Aqua satellite. We will present a comprehensive analysis of these events and discuss the potential thunderstorm source mechanism.

  12. Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components. The MTF is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of the actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy inference system, and the back-propagation learning algorithm is used for training this network. The intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
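
    For reference, the modulation transfer at a spatial frequency f can be written from the image and object modulation depths; this is a standard textbook definition, not a formula quoted from the paper.

```latex
% Modulation (contrast) of a sinusoidal pattern and the resulting MTF.
M = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}, \qquad
\mathrm{MTF}(f) = \frac{M_{\mathrm{image}}(f)}{M_{\mathrm{object}}(f)}
```

    A value of 1 means contrast at that frequency is transferred perfectly, while a value approaching 0 means the detail is no longer resolved.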

  13. Clinical Neuropathology Views - 2/2016: Digital networking in European neuropathology: An initiative to facilitate truly interactive consultations.

    PubMed

    Idoate, Miguel A; García-Rojo, Marcial

    2016-01-01

    Digital technology is progressively changing our vision of the practice of neuropathology. There are a number of facts that support the introduction of digital neuropathology. With the development of whole-slide imaging (WSI) systems, the difficulties involved in implementing a neuropathology network have been solved. A relevant difficulty has been image standardization, but an open digital image communication protocol defined by the Digital Imaging and Communications in Medicine (DICOM) standard is already a reality. The neuropathology network should be established in Europe because it is the expected geographic context for relationships among European neuropathologists. There are several limitations in the implementation of a digital neuropathology consultancy network, such as financial support, operational costs, legal issues, and technical assistance for clients. All of these items have been considered and should be solved before implementing the proposal. Finally, the authors conclude that a European digital neuropathology network should be created for patients' benefit.

  14. Proceedings of the 1987 IEEE international conference on systems, man, and cybernetics. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    This book contains the proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. Topics include the following: robotics; knowledge base simulation; software systems; image and pattern recognition; neural networks; and image processing.

  15. Magnetoencephalographic imaging of deep corticostriatal network activity during a rewards paradigm.

    PubMed

    Kanal, Eliezer Y; Sun, Mingui; Ozkurt, Tolga E; Jia, Wenyan; Sclabassi, Robert

    2009-01-01

    The human rewards network is a complex system spanning both cortical and subcortical regions. While much is known about the functions of the various components of the network, research on the behavior of the network as a whole has been stymied due to an inability to detect signals at a high enough temporal resolution from both superficial and deep network components simultaneously. In this paper, we describe the application of magnetoencephalographic imaging (MEG) combined with advanced signal processing techniques to this problem. Using data collected while subjects performed a rewards-related gambling paradigm demonstrated to activate the rewards network, we were able to identify neural signals which correspond to deep network activity. We also show that this signal was not observable prior to filtration. These results suggest that MEG imaging may be a viable tool for the detection of deep neural activity.

  16. Development of a networked four-million-pixel pathological and radiological digital image presentation system and its application to medical conferences

    NASA Astrophysics Data System (ADS)

    Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo

    2001-08-01

    The widespread adoption of digital technology in the medical field has led to a demand for a high-quality, high-speed, and user-friendly digital image presentation system for daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90GB RAID, and a database server workstation. A 100Mbps Ethernet LAN interconnects all the sub-systems. Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 × 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size, and found them applicable to actual medical diagnosis. They also appreciated the short image-switching time, which contributed to smooth presentation. Thus, we confirmed that the system's characteristics met the requirements.

  17. Cephalometric landmark detection in dental x-ray images using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Lee, Hansang; Park, Minseok; Kim, Junmo

    2017-03-01

    In dental X-ray images, accurate detection of cephalometric landmarks plays an important role in clinical diagnosis, treatment and surgical decisions for dental problems. In this work, we propose an end-to-end deep learning system for cephalometric landmark detection in dental X-ray images, using convolutional neural networks (CNNs). For detecting 19 cephalometric landmarks in dental X-ray images, we develop a detection system using CNN-based coordinate-wise regression. By viewing the x- and y-coordinates of all landmarks as 38 independent variables, multiple CNN-based regression systems are constructed to predict the coordinate variables from input X-ray images. First, each coordinate variable is normalized by the image height or width. For each normalized coordinate variable, a CNN-based regression system is trained on the training images and the corresponding coordinate variable to be regressed. We train 38 regression systems with the same CNN structure, one per coordinate variable. Finally, we compute the 38 coordinate variables with these trained systems from unseen images and extract the 19 landmarks by pairing the regressed coordinates. In experiments, the public database from the Grand Challenges in Dental X-ray Image Analysis in ISBI 2015 was used, and the proposed system showed promising performance by successfully locating the cephalometric landmarks within considerable margins from the ground truths.
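
    The coordinate-wise regression setup can be sketched as one small CNN per normalized coordinate, trained with a mean-squared-error loss; the architecture, image size, and random stand-in data below are illustrative assumptions, not the network used in the paper.

```python
# Sketch of CNN-based coordinate-wise regression: one small network predicts a
# single normalized landmark coordinate in [0, 1] from the input X-ray image.
# The architecture, image size, and stand-in data are illustrative assumptions.
import torch
import torch.nn as nn


class CoordRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))      # normalized coordinate


# In the paper's setting, 38 such regressors (x and y for 19 landmarks) would
# be trained independently; here one is trained briefly on random stand-in data.
model = CoordRegressor()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 128, 128)
targets = torch.rand(8, 1)                      # normalized ground truth
for _ in range(5):
    optim.zero_grad()
    loss = nn.functional.mse_loss(model(images), targets)
    loss.backward()
    optim.step()
print(float(loss))
```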

  18. Remote consultation and diagnosis in medical imaging using a global PACS backbone network

    NASA Astrophysics Data System (ADS)

    Martinez, Ralph; Sutaria, Bijal N.; Kim, Jinman; Nam, Jiseung

    1993-10-01

    A Global PACS is a national network which interconnects several PACS networks at medical and hospital complexes using a national backbone network. A Global PACS environment enables new and beneficial operations between radiologists and physicians when they are located in different geographical locations. One operation allows the radiologist to view the same image folder at both the Local and Remote sites so that a diagnosis can be performed. The paper describes the user interface, database management, and network communication software which has been developed in the Computer Engineering Research Laboratory and the Radiology Research Laboratory. Specifically, a design for a file management system in a distributed environment is presented. In the remote consultation and diagnosis operation, a set of images is requested from the database archive system and sent to the Local and Remote workstation sites on the Global PACS network. Viewing the same images, the radiologists use pointing overlay commands, or frames, to point out features on the images. Each workstation transfers these frames to the other workstation, so that an interactive session for diagnosis takes place. In this phase, we use both fixed frames and variable-size frames, the latter to outline an object. The data packets for these frames traverse the national backbone in real time. We accomplish this by using TCP/IP protocol sockets for communications. The remote consultation and diagnosis operation has been tested in real time between the University Medical Center and the Bowman Gray School of Medicine at Wake Forest University, over the Internet. In this paper, we show the feasibility of the operation in a Global PACS environment. Future improvements to the system will include real-time voice and interactive compressed video scenarios.
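
    The exchange of pointing-overlay frames between two workstations can be sketched with plain TCP sockets carrying a small length-prefixed JSON message; the message fields and the framing below are illustrative assumptions, not the Global PACS protocol.

```python
# Sketch of sending a pointing-overlay "frame" between two consultation
# workstations over a TCP socket. The JSON fields and the length-prefix
# framing are illustrative assumptions, not the Global PACS protocol.
import json
import socket
import struct


def send_frame(sock: socket.socket, frame: dict) -> None:
    payload = json.dumps(frame).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)  # length prefix


def recv_frame(sock: socket.socket) -> dict:
    (length,) = struct.unpack("!I", sock.recv(4))
    payload = b""
    while len(payload) < length:
        payload += sock.recv(length - len(payload))
    return json.loads(payload.decode("utf-8"))


if __name__ == "__main__":
    # Loopback demo standing in for the Local and Remote workstations.
    server = socket.create_server(("127.0.0.1", 5050))
    client = socket.create_connection(("127.0.0.1", 5050))
    peer, _ = server.accept()
    send_frame(client, {"image_id": "study-001/slice-12",
                        "type": "variable",
                        "outline": [[120, 85], [140, 90], [133, 110]]})
    print(recv_frame(peer))
```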

  19. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates have become necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well suited to parallel implementations. Improvements to DANN-based training, through the use of a variance classifier for controlling a bank of neural networks, speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  20. NEFI: Network Extraction From Images

    PubMed Central

    Dirnberger, M.; Kehl, T.; Neumann, A.

    2015-01-01

    Networks are amongst the central building blocks of many systems. Given a graph of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to study various types of networks. In some applications, graph acquisition is relatively simple. However, for many networks data collection relies on images where graph extraction requires domain-specific solutions. Here we introduce NEFI, a tool that extracts graphs from images of networks originating in various domains. Regarding previous work on graph extraction, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners to easily extract graphs from images by combining basic tools from image processing, computer vision and graph theory. Thus, NEFI constitutes an alternative to tedious manual graph extraction and special purpose tools. We anticipate NEFI to enable time-efficient collection of large datasets. The analysis of these novel datasets may open up the possibility to gain new insights into the structure and function of various networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de. PMID:26521675
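
    The core idea of extracting a graph from an image of a network can be sketched with off-the-shelf tools: threshold, skeletonize, then connect neighbouring skeleton pixels into a graph. This is a generic illustration with scikit-image and NetworkX, not NEFI's actual pipeline.

```python
# Generic sketch of graph extraction from an image of a network: threshold,
# skeletonize, then connect 8-neighbouring skeleton pixels into a graph.
# This illustrates the idea only; it is not NEFI's actual pipeline.
import networkx as nx
import numpy as np
from skimage.morphology import skeletonize


def image_to_graph(binary_image: np.ndarray) -> nx.Graph:
    skeleton = skeletonize(binary_image.astype(bool))
    graph = nx.Graph()
    rows, cols = np.nonzero(skeleton)
    pixels = set(zip(rows.tolist(), cols.tolist()))
    for r, c in pixels:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in pixels:
                    graph.add_edge((r, c), (r + dr, c + dc))
    return graph


if __name__ == "__main__":
    # Toy binary image of a cross-shaped "network".
    img = np.zeros((21, 21), dtype=np.uint8)
    img[10, 2:19] = 1
    img[2:19, 10] = 1
    g = image_to_graph(img)
    print(g.number_of_nodes(), g.number_of_edges())
```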

  1. The cognitive structural approach for image restoration

    NASA Astrophysics Data System (ADS)

    Mardare, Igor; Perju, Veacheslav; Casasent, David

    2008-03-01

    We analyze the important and timely problem of restoring defective images of scenes. The proposed approach restores scenes with a system that reproduces phenomena of human intelligence used for the restoration and recognition of images. Cognitive models of the restoration process are elaborated. The models are realized by intellectual processors built on the basis of neural networks and associative memory, using the neural network simulator NNToolbox from MATLAB 7.0. The models provide restoration and semantic construction of scene images from defective images of the separate objects.

  2. Impact of ultrasound video transfer on the practice of ultrasound

    NASA Astrophysics Data System (ADS)

    Duerinckx, Andre J.; Hayrapetian, Alek S.; Grant, Edward G.; Valentino, Daniel J.; Rahbar, Darius; Kiszonas, Mike; Franco, Ricky; Melany, Michelle; Narin, Sherelle L.; Ragavendra, Nagesh

    1996-05-01

    Sonography can be highly dependent on real-time imaging and as such is highly physician intensive. Such situations arise mostly during complicated ultrasound radiology studies or echocardiology examinations. Under those circumstances it would be of benefit to transmit real-time images beyond the immediate area of the ultrasound laboratory when a physician is not on location. We undertook this study to determine if both static and dynamic image transfer to remote locations might be accomplished using an ultrafast ATM network and PACS. Image management of the local image files was performed by a commercial PACS from AGFA corporation. The local network was Ethernet based, and the global network was based on Asynchronous Transfer Mode (ATM, rates up to 100 Mbits/sec). Real-time image transfer involved two teaching hospitals, one of which had 2 separate ultrasound facilities. Radiologists consulted with technologists via telephone while the examinations were being performed. We evaluated the applications of an ATM network providing real-time video for ultrasound imaging in a clinical environment and its potential impact on health-care delivery and clinical teaching. This technology increased technologist and physician productivity due to the elimination of commute time for physicians and waiting time for technologists and patients. Physician confidence in diagnosis increased compared to reviewing static images alone. This system provided instant access for radiologists to real-time scans from remote sites. Image quality and frame rate were equivalent to the original. The system increased productivity by allowing physicians to monitor studies at multiple sites simultaneously.

  3. Image Understanding by Image-Seeking Adaptive Networks (ISAN).

    DTIC Science & Technology

    1987-08-10

    our research on adaptive neural networks in the visual and sensory-motor cortex of cats. We demonstrate that, under certain conditions, plasticity is...understanding in organisms proceeds directly from adaptively seeking whole images and not via a preliminary analysis of elementary features, followed by object...empirical research has always been that ultimately any neural system has to serve behavior and that behavior serves survival. Evolutionary selection makes it

  4. Using PACS and wavelet-based image compression in a wide-area network to support radiation therapy imaging applications for satellite hospitals

    NASA Astrophysics Data System (ADS)

    Smith, Charles L.; Chu, Wei-Kom; Wobig, Randy; Chao, Hong-Yang; Enke, Charles

    1999-07-01

    An ongoing PACS project at our facility has been expanded to include providing and managing images used for routine clinical operation of the department of radiation oncology. The intent of our investigation has been to enable our clinical radiotherapy service to enter the tele-medicine environment through the use of a PACS system initially implemented in the department of radiology. The backbone of the imaging network includes five CT and three MR scanners located across three imaging centers. A PC workstation in the department of radiation oncology was used to transmit CT images to a satellite facility located approximately 60 miles from the primary center. Chest CT images were used to analyze network transmission performance. The connectivity established between the primary department and the satellite has fulfilled all image criteria required by the oncologist. Establishing the link to the oncologist at the satellite reduced the bottlenecking of imaging-related tasks at the primary facility due to physician absence. A 30:1 compression ratio using a wavelet-based algorithm provided clinically acceptable images for treatment planning. Clinical radiotherapy images can be effectively managed in a wide-area network to link satellite facilities to larger clinical centers.
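
    Wavelet-based compression of the kind mentioned above can be sketched with PyWavelets: decompose, discard the smallest detail coefficients, and reconstruct. The wavelet choice, decomposition level, and keep-fraction below are assumptions for illustration, not the clinical 30:1 algorithm.

```python
# Sketch of lossy wavelet compression: decompose the image, zero out small
# detail coefficients, and reconstruct. Wavelet, level and keep-fraction are
# illustrative assumptions, not the clinical 30:1 algorithm from the paper.
import numpy as np
import pywt


def wavelet_compress(image: np.ndarray, keep: float = 0.05) -> np.ndarray:
    coeffs = pywt.wavedec2(image, "db4", level=4)
    # Collect all detail coefficients to pick a global threshold.
    details = np.concatenate([np.abs(band).ravel()
                              for level in coeffs[1:] for band in level])
    threshold = np.quantile(details, 1.0 - keep)
    compressed = [coeffs[0]]                      # keep approximation as-is
    for level in coeffs[1:]:
        compressed.append(tuple(pywt.threshold(band, threshold, mode="hard")
                                for band in level))
    return pywt.waverec2(compressed, "db4")


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ct_slice = rng.random((256, 256))             # stand-in for a CT image
    recon = wavelet_compress(ct_slice)
    print(float(np.mean((recon[:256, :256] - ct_slice) ** 2)))
```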

  5. Filmless radiology: The design, integration, implementation, and evaluation of a digital imaging network. Potential investigations to be conducted in conjunction with the Digital-Imaging Network System (DINS) evaluation project. Revision 1. Annual report, 1 March 1987-28 February 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerlin, B.D.; Cerva, J.R.; Glenn, M.E.

    This document describes evaluation studies and technical investigations proposed for the three-year Digital Imaging Network System (DINS) prototype project, sponsored by the U.S. Army Medical Research and Development Command, Ft. Detrick, Maryland. The project has three overall goals. The first is to install and operate a prototype DINS at each of two University-based hospitals for test purposes. The second is to evaluate key aspects of each prototype system once it is in full operation. The third is to develop guidelines and specifications for an operational DINS suitable for use by the military and others developing systems of the future. This document defines twelve overall evaluative questions for use in meeting the second and third objectives of the project and proposes studies that will answer these questions.

  6. Architectures for Device Aware Network

    DTIC Science & Technology

    2005-03-01

    [Table-of-contents and figure-list excerpts: "PDA in DAN Mode"; "Cell Phone in DAN Mode"; Figure 15, "PDA in DAN Mode - Reduced Resolution Image"; Figure 16, "Cell Phone in DAN Mode - No Image".] ...computer, notebook computer, cell phone and a host of networked embedded systems) may have extremely differing capabilities and resources to retrieve and...

  7. Visual Based Retrieval Systems and Web Mining--Introduction.

    ERIC Educational Resources Information Center

    Iyengar, S. S.

    2001-01-01

    Briefly discusses Web mining and image retrieval techniques, and then presents a summary of articles in this special issue. Articles focus on Web content mining, artificial neural networks as tools for image retrieval, content-based image retrieval systems, and personalizing the Web browsing experience using media agents. (AEF)

  8. Comparison of eye imaging pattern recognition using neural network

    NASA Astrophysics Data System (ADS)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

    The advantage of an eye recognition system is that it can automatically identify and verify a person from digital images or a video source. The eye has various characteristics, such as the color of the iris, the size of the pupil and the shape of the eye. This study presents the analysis, design and implementation of a system for recognition of eye images. All the eye images captured from the webcam in RGB format must pass through several processing techniques before they can be input to the pattern recognition process. The results show that the final weight and bias values obtained after complete training on 6 eye images for one subject are memorized by the neural network system and become the reference weights and biases for the testing stage. The targets are classified into 5 different types for 5 subjects. The eye images can then be used to recognize the subject based on the target that was set during the training process. When the values for a new eye image and an eye image in the database are almost equal, the eye image is considered matched.

  9. Cerebral microbleed detection in traumatic brain injury patients using 3D convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Standvoss, K.; Crijns, T.; Goerke, L.; Janssen, D.; Kern, S.; van Niedek, T.; van Vugt, J.; Alfonso Burgos, N.; Gerritse, E. J.; Mol, J.; van de Vooren, D.; Ghafoorian, M.; van den Heuvel, T. L. A.; Manniesing, R.

    2018-02-01

    The number and location of cerebral microbleeds (CMBs) in patients with traumatic brain injury (TBI) is important to determine the severity of trauma and may hold prognostic value for patient outcome. However, manual assessment is subjective and time-consuming due to the resemblance of CMBs to blood vessels, the possible presence of imaging artifacts, and the typical heterogeneity of trauma imaging data. In this work, we present a computer aided detection system based on 3D convolutional neural networks for detecting CMBs in 3D susceptibility weighted images. Network architectures with varying depth were evaluated. Data augmentation techniques were employed to improve the networks' generalization ability and selective sampling was implemented to handle class imbalance. The predictions of the models were clustered using a connected component analysis. The system was trained on ten annotated scans and evaluated on an independent test set of eight scans. Despite this limited data set, the system reached a sensitivity of 0.87 at 16.75 false positives per scan (2.5 false positives per CMB), outperforming related work on CMB detection in TBI patients.
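
    A 3D convolutional classifier of the kind used on candidate patches from susceptibility-weighted volumes can be sketched as follows; the patch size, layer sizes and two-class output are illustrative assumptions rather than the architectures evaluated in the paper.

```python
# Sketch of a small 3D CNN that classifies a candidate patch from a 3D
# susceptibility-weighted volume as CMB / not-CMB. Patch size and layer
# sizes are illustrative assumptions, not the architectures from the paper.
import torch
import torch.nn as nn


class Tiny3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Flatten(),
            nn.Linear(16 * 4 * 4 * 4, 2),       # CMB vs. non-CMB scores
        )

    def forward(self, x):
        return self.net(x)


model = Tiny3DCNN()
patch = torch.randn(4, 1, 16, 16, 16)           # batch of candidate patches
print(model(patch).softmax(dim=1))
```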

  10. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  11. The exploration of outer space with cameras: A history of the NASA unmanned spacecraft missions

    NASA Astrophysics Data System (ADS)

    Mirabito, M. M.

    The use of television cameras and other video imaging devices to explore the solar system's planetary bodies with unmanned spacecraft is chronicled. Attention is given to the missions and the imaging devices, beginning with the Ranger 7 moon mission, which featured the first successfully operated electrooptical subsystem, six television cameras with vidicon image sensors. NASA established a network of parabolic, ground-based antennas on the earth (the Deep Space Network) to receive signals from spacecraft travelling farther than 16,000 km into space. The image processing and enhancement techniques used to convert spacecraft data transmissions into black and white and color photographs are described, together with the technological requirements that drove the development of the various systems. Terrestrial applications of the planetary imaging systems are explored, including medical and educational uses. Finally, the implementation and functional characteristics of CCDs are detailed, noting their installation on the Space Telescope.

  12. Microcapillary imaging of lamina cribrosa in porcine eyes using photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Moothanchery, Mohesh; Chuangsuwanich, Thanadet; Yan, Alvan Tsz Chung; Schmetterer, Leopold; Girard, Michael J. A.; Pramanik, Manojit

    2018-02-01

    In order to understand the pathophysiology of glaucoma, lamina cribrosa (LC) perfusion needs to be the subject of thorough investigation. It is currently difficult to obtain high-resolution images of the embedded microcapillary network of the LC using conventional imaging techniques. In this study, an optical resolution photoacoustic microscopy (OR-PAM) system was used for imaging the lamina cribrosa of an ex vivo porcine eye. An extrinsic contrast agent was used to perfuse the eye via its ciliary arteries. The OR-PAM system has a lateral resolution of 4 μm and an axial resolution of 30 μm. The high-resolution system was able to resolve the perfused LC microcapillary network, revealing the vascular structure within the LC thickness. OR-PAM could be a promising imaging modality to study LC perfusion and hence could be used to elucidate the hemodynamic aspect of glaucoma.

  13. Detection of neuron membranes in electron microscopy images using a serial neural network architecture.

    PubMed

    Jurrus, Elizabeth; Paiva, Antonio R C; Watanabe, Shigeki; Anderson, James R; Jones, Bryan W; Whitaker, Ross T; Jorgensen, Erik M; Marc, Robert E; Tasdizen, Tolga

    2010-12-01

    Study of nervous systems via the connectome, the map of connectivities of all neurons in that system, is a challenging problem in neuroscience. Towards this goal, neurobiologists are acquiring large electron microscopy datasets. However, the sheer volume of these datasets renders manual analysis infeasible. Hence, automated image analysis methods are required for reconstructing the connectome from these very large image collections. Segmentation of neurons in these images, an essential step of the reconstruction pipeline, is challenging because of noise, anisotropic shapes and brightness, and the presence of confounding structures. The method described in this paper uses a series of artificial neural networks (ANNs) in a framework combined with a feature vector that is composed of image intensities sampled over a stencil neighborhood. Several ANNs are applied in series, allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. We develop the method of serial ANNs and show that the learned context does improve detection over traditional ANNs. We also demonstrate advantages over previous membrane detection methods. The results are a significant step towards an automated system for the reconstruction of the connectome. Copyright 2010 Elsevier B.V. All rights reserved.

  14. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    PubMed

    Yang, Fan; Paindavoine, M

    2003-01-01

    This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for an image size of 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.

  15. DICOM image secure communications with Internet protocols IPv6 and IPv4.

    PubMed

    Zhang, Jianguo; Yu, Fenghai; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen

    2007-01-01

    Image-data transmission from one site to another through a public network is usually characterized in terms of privacy, authenticity, and integrity. In this paper, we first describe a general scenario of how an image is delivered from one site to another through a wide-area network (WAN) with security features of data privacy, integrity, and authenticity. Second, we give the common implementation method of the digital imaging and communication in medicine (DICOM) image communication software library with IPv6/IPv4 for high-speed broadband Internet by using open-source software. Third, we discuss two major security-transmission methods, the IP security (IPSec) and the secure-socket layer (SSL) or transport-layer security (TLS), currently being used in medical-image-data communication with privacy support. Fourth, we describe a test schema of multiple-modality DICOM-image communications through TCP/IPv4 and TCP/IPv6 with different security methods, different security algorithms, and operating systems, and evaluate the test results. We found that there are tradeoff factors between choosing the IPsec- and the SSL/TLS-based security implementation of IPv6/IPv4 protocols. If the WAN networks only use IPv6, such as in high-speed broadband Internet, the choice is IPsec-based security. If the networks are IPv4 or a combination of IPv6 and IPv4, it is better to use SSL/TLS security. The Linux platform has more security algorithms implemented than the Windows (XP) platform, and can achieve better performance in most experiments of IPv6- and IPv4-based DICOM-image communications. In teleradiology or enterprise-PACS applications, the Linux operating system may be the better choice as peer security gateways for both the IPsec- and the SSL/TLS-based secure DICOM communications across public networks.
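
    As a generic illustration of the SSL/TLS option discussed above, the sketch below wraps a TCP connection with Python's standard ssl module before sending image bytes; the host name, port, certificate paths and file name are placeholders, and this is not the DICOM toolkit evaluated in the paper.

```python
# Generic sketch of sending image data over a TLS-protected TCP connection
# using Python's standard ssl module. Host, port, certificate and file paths
# are placeholders; this is not the DICOM toolkit evaluated in the paper.
import socket
import ssl

REMOTE_HOST = "pacs-gateway.example.org"   # placeholder peer gateway
REMOTE_PORT = 11112                        # placeholder port


def send_image_over_tls(image_bytes: bytes) -> None:
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    context.load_verify_locations("ca-cert.pem")        # placeholder CA file
    with socket.create_connection((REMOTE_HOST, REMOTE_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=REMOTE_HOST) as tls:
            tls.sendall(len(image_bytes).to_bytes(8, "big"))  # simple framing
            tls.sendall(image_bytes)


if __name__ == "__main__":
    with open("image.dcm", "rb") as f:       # placeholder image file
        send_image_over_tls(f.read())
```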

  16. Software for hyperspectral, joint photographic experts group (.JPG), portable network graphics (.PNG) and tagged image file format (.TIFF) segmentation

    NASA Astrophysics Data System (ADS)

    Bruno, L. S.; Rodrigo, B. P.; Lucio, A. de C. Jorge

    2016-10-01

    This paper presents a system based on a Multilayer Perceptron neural network for segmentation of drone-acquired agricultural images. The application allows a supervising user to train the classes that will later be interpreted by the neural network; these classes are generated manually from attributes pre-selected in the application. After attribute selection, a segmentation process extracts the relevant information from different types of images, RGB or hyperspectral. The application can also extract the geographic coordinates from the image metadata, georeferencing every pixel in the image. Despite the excessive memory consumption of regions of interest in hyperspectral images, segmentation is still possible using bands chosen by the user, which can be combined in different ways to obtain different results.
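
    Pixel-wise supervised segmentation with a Multilayer Perceptron can be sketched with scikit-learn: each pixel's band values form a feature vector and user-labelled pixels form the training set. The band count, class labels and layer sizes below are illustrative assumptions, not the system described in the paper.

```python
# Sketch of supervised pixel-wise segmentation with a Multilayer Perceptron:
# each pixel's band values form a feature vector, and user-labelled pixels
# form the training set. Band count, labels and layer sizes are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
H, W, BANDS = 64, 64, 5                       # small stand-in multiband image
image = rng.random((H, W, BANDS))

# Pretend the user labelled 200 pixels as soil (0), crop (1) or shadow (2).
labelled_idx = rng.choice(H * W, size=200, replace=False)
pixels = image.reshape(-1, BANDS)
labels = rng.integers(0, 3, size=200)

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(pixels[labelled_idx], labels)

# Classify every pixel of the image to obtain the segmentation map.
segmentation = mlp.predict(pixels).reshape(H, W)
print(np.bincount(segmentation.ravel()))
```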

  17. A telecommunications journey rural health network.

    PubMed

    Moore, Joe

    2012-01-01

    Utilizing a multi-gigabit statewide fiber healthcare network, Radiology Consultants of Iowa (RCI) set out to provide instantaneous service to their rural, critical-access hospital partners. RCI's idea was to assemble a collection of technologies and services that would even out workflow, reduce time on the road, and provide superior service. These technologies included PACS, voice-recognition-enabled dictation, HL7 interface technology, an imaging system for digitizing paper and prior films, and modern communication networks. The Iowa Rural Health Telecommunication Project was undertaken to form a system that all critical-access hospitals would participate in, allowing RCI radiologists the efficiency of "any image, anywhere, anytime".

  18. First Steps Toward Incorporating Image Based Diagnostics Into Particle Accelerator Control Systems Using Convolutional Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, A. L.; Biedron, S. G.; Milton, S. V.

    At present, a variety of image-based diagnostics are used in particle accelerator systems. Often times, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.

  19. 3D printed biomimetic vascular phantoms for assessment of hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Jianting; Ghassemi, Pejhman; Melchiorri, Anthony; Ramella-Roman, Jessica; Mathews, Scott A.; Coburn, James; Sorg, Brian; Chen, Yu; Pfefer, Joshua

    2015-03-01

    The emerging technique of three-dimensional (3D) printing provides a revolutionary way to fabricate objects with biologically realistic geometries. Previously we have performed optical and morphological characterization of basic 3D printed tissue-simulating phantoms and found them suitable for use in evaluating biophotonic imaging systems. In this study we assess the potential for printing phantoms with irregular, image-defined vascular networks that can be used to provide clinically-relevant insights into device performance. A previously acquired fundus camera image of the human retina was segmented, embedded into a 3D matrix, edited to incorporate the tubular shape of vessels and converted into a digital format suitable for printing. A polymer with biologically realistic optical properties was identified by spectrophotometer measurements of several commercially available samples. Phantoms were printed with the retinal vascular network reproduced as ~1.0 mm diameter channels at a range of depths up to ~3 mm. The morphology of the printed vessels was verified by volumetric imaging with μ-CT. Channels were filled with hemoglobin solutions at controlled oxygenation levels, and the phantoms were imaged by a near-infrared hyperspectral reflectance imaging system. The effect of vessel depth on hemoglobin saturation estimates was studied. Additionally, a phantom incorporating the vascular network at two depths was printed and filled with hemoglobin solution at two different saturation levels. Overall, results indicated that 3D printed phantoms are useful for assessing biophotonic system performance and have the potential to form the basis of clinically-relevant standardized test methods for assessment of medical imaging modalities.

  20. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  1. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. This implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes that region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results of cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotic and computer vision industries.

  2. Salience network dynamics underlying successful resistance of temptation

    PubMed Central

    Nomi, Jason S; Calhoun, Vince D; Stelzel, Christine; Paschke, Lena M; Gaschler, Robert; Goschke, Thomas; Walter, Henrik; Uddin, Lucina Q

    2017-01-01

    Abstract Self-control and the ability to resist temptation are critical for successful completion of long-term goals. Contemporary models in cognitive neuroscience emphasize the primary role of prefrontal cognitive control networks in aligning behavior with such goals. Here, we use gaze pattern analysis and dynamic functional connectivity fMRI data to explore how individual differences in the ability to resist temptation are related to intrinsic brain dynamics of the cognitive control and salience networks. Behaviorally, individuals exhibit greater gaze distance from target location (e.g. higher distractibility) during presentation of tempting erotic images compared with neutral images. Individuals whose intrinsic dynamic functional connectivity patterns gravitate toward configurations in which salience detection systems are less strongly coupled with visual systems resist tempting distractors more effectively. The ability to resist tempting distractors was not significantly related to intrinsic dynamics of the cognitive control network. These results suggest that susceptibility to temptation is governed in part by individual differences in salience network dynamics and provide novel evidence for involvement of brain systems outside canonical cognitive control networks in contributing to individual differences in self-control. PMID:29048582

  3. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  4. Effective Multifocus Image Fusion Based on HVS and BP Neural Network

    PubMed Central

    Yang, Yong

    2014-01-01

    The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back propagation (BP) neural network is presented. Three features that reflect the clarity of a pixel are first extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by a fusion rule for those focused regions. Experimental results show that the proposed method can provide better performance and outperform several existing popular fusion methods in terms of both objective and subjective evaluations. PMID:24683327
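
    A minimal sketch of the pixel-wise decision step described above, assuming grayscale float images and three illustrative clarity features (local variance, gradient energy, Laplacian energy); scikit-learn's MLPClassifier stands in for the paper's BP network, so this is a hedged approximation rather than the authors' implementation.

        # Sketch: decide per pixel which source image is sharper, then fuse.
        import numpy as np
        from scipy.ndimage import uniform_filter, laplace, sobel
        from sklearn.neural_network import MLPClassifier

        def clarity_features(img, size=7):
            """Three illustrative per-pixel clarity measures (stand-ins for the paper's features)."""
            img = img.astype(float)
            mean = uniform_filter(img, size)
            variance = uniform_filter(img ** 2, size) - mean ** 2
            grad_energy = uniform_filter(sobel(img, 0) ** 2 + sobel(img, 1) ** 2, size)
            lap_energy = uniform_filter(laplace(img) ** 2, size)
            return np.stack([variance, grad_energy, lap_energy], axis=-1).reshape(-1, 3)

        def fuse(img_a, img_b, clf):
            fa, fb = clarity_features(img_a), clarity_features(img_b)
            # The classifier outputs 1 where image A is judged clearer than image B.
            a_clearer = clf.predict(fa - fb).reshape(img_a.shape).astype(bool)
            return np.where(a_clearer, img_a, img_b)

        # Training would use pixels with known focus labels, e.g.:
        # clf = MLPClassifier(hidden_layer_sizes=(10,)).fit(X_train, y_train)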

  5. A novel biomedical image indexing and retrieval system via deep preference learning.

    PubMed

    Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou

    2018-05-01

    Traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either rely only on pixel-level and other low-level features to describe an image or use deep features to describe images, and still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance for indexing and retrieval of biomedical images. We exploit the currently popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding similar images, we also introduce preference learning technology to train a preference model for the query image, which can output a similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods in detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, the experimental results demonstrate that our proposed algorithms outperform the state-of-the-art techniques in indexing biomedical images. We propose a novel and automated indexing system based on deep preference learning to characterize biomedical images for developing computer aided diagnosis (CAD) systems in healthcare. Our proposed system shows an outstanding indexing ability and high efficiency for biomedical image retrieval applications, and it can be used to collect and annotate the high-resolution images in a biomedical database for further biomedical image research and applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Quasi-real-time telemedical checkup system for x-ray examination of UGI tract based on high-speed network

    NASA Astrophysics Data System (ADS)

    Sakano, Toshikazu; Yamaguchi, Takahiro; Fujii, Tatsuya; Okumura, Akira; Furukawa, Isao; Ono, Sadayasu; Suzuki, Junji; Ando, Yutaka; Kohda, Ehiichi; Sugino, Yoshinori; Okada, Yoshiyuki; Amaki, Sachi

    2000-05-01

    We constructed a high-speed medical information network testbed, which is one of the largest testbeds in Japan, and applied it to practical medical checkups for the first time. The constructed testbed, which we call IMPACT, consists of a Super-High Definition Imaging system, a video conferencing system, a remote database system, and a 6 - 135 Mbps ATM network. The interconnected facilities include the School of Medicine at Keio University, a company's clinic, and an NTT R&D center, all in and around Tokyo. We applied IMPACT to the mass screening of the upper gastrointestinal (UGI) tract at the clinic. All 5419 radiographic images acquired at the clinic for 523 employees were digitized (2048 × 1698 × 12 bits) and transferred to a remote database at NTT. We then selected about 50 images from five patients and sent them to nine radiological specialists at Keio University. The processing, which includes film digitization, image data transfer, and database registration, took 574 seconds per patient on average. The average reading time at Keio University was 207 seconds. The overall processing time was estimated to be 781 seconds per patient. From these experimental results, we conclude that quasi-real-time tele-medical checkups are possible with our prototype system.

  7. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm to solve the robot path-finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consisting of relaxation oscillators, has been used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled, and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, starting from an initial point. The proposed algorithm works by observing the motion of the wave front of the active waves. The host program first loads the arena model onto the active wave generator network and commands it to start the generation. It then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm is completed, a vectorial data image is generated. The path from any pixel on this image to the active-wave-generating pixel is drawn by the vectors on this image. The robot arena may be a complicated labyrinth or may have a simple geometry, but the arena surface must always be flat. Our autowave generator CNN implementation, hosted on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16,384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor, and the network image is depicted on the monitor simultaneously.
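
    The host-side analysis described above can be viewed, in software, as a wavefront expansion: the wave spreads outward from a source cell, and each free cell records a vector pointing back toward the cell that reached it first. The sketch below is a plain breadth-first analogue of that idea, not the FPGA/CNN implementation; the grid encoding and function names are assumptions.

        # Sketch: a wavefront expanded from the goal cell labels every free cell with
        # a vector pointing back toward the wave source; a path is read off by
        # following those vectors.
        from collections import deque

        def wave_vectors(grid, goal):
            """grid: 2-D list, 0 = free, 1 = obstacle; goal: (row, col) wave source."""
            rows, cols = len(grid), len(grid[0])
            vectors = {goal: (0, 0)}
            queue = deque([goal])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                            and (nr, nc) not in vectors:
                        vectors[(nr, nc)] = (-dr, -dc)  # points back toward the wave source
                        queue.append((nr, nc))
            return vectors

        def path_from(start, goal, vectors):
            path, cell = [start], start
            while cell != goal:
                dr, dc = vectors[cell]
                cell = (cell[0] + dr, cell[1] + dc)
                path.append(cell)
            return path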

  8. Filmless PACS in a multiple facility environment

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.

    1996-05-01

    A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi- facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World-Wide-Web can be used to generate worklists and patient folders across facilities. A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.

  9. A web-based system for neural network based classification in temporomandibular joint osteoarthritis.

    PubMed

    de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos

    2018-07-01

    The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). This study's imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with a diagnosis of TMJOA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who had experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age- and sex-matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire, blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high-dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ, with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI also remotely ran a novel statistical analysis, the Multivariate Functional Shape Data Analysis, which computed high-dimensional correlations between 3D shape coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.

  10. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, removing irrelevant events such as birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged to cover a 360° view, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.

  11. NRL Fact Book 2010

    DTIC Science & Technology

    2010-01-01

    ... service); high assurance software; distributed network-based battle management; high performance computing supporting uniform and nonuniform memory ... VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power photodetector characterization ... indium antimonide (InSb) imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services

  12. A complex network approach for nanoparticle agglomeration analysis in nanoscale images

    NASA Astrophysics Data System (ADS)

    Machado, Bruno Brandoli; Scabini, Leonardo Felipe; Margarido Orue, Jonatan Patrick; de Arruda, Mauro Santos; Goncalves, Diogo Nunes; Goncalves, Wesley Nunes; Moreira, Raphaell; Rodrigues-Jr, Jose F.

    2017-02-01

    Complex networks have been widely used in science and technology because of their ability to represent several systems. One of these systems is found in Biochemistry, in which the synthesis of new nanoparticles is a hot topic. However, the interpretation of experimental results in the search for new nanoparticles poses several challenges. This is due to the characteristics of nanoparticle images and due to their multiple intricate properties; one property of recurrent interest is the agglomeration of particles. Addressing this issue, this paper introduces an approach that uses complex networks to detect and describe nanoparticle agglomerates so as to foster easier and more insightful analyses. In this approach, each detected particle in an image corresponds to a vertex and the distances between the particles define a criterion for creating edges. Edges are created if the distance is smaller than a radius of interest. Once this network is set, we calculate several discrete measures able to reveal the most outstanding agglomerates in a nanoparticle image. Experimental results using images of scanning tunneling microscopy (STM) of gold nanoparticles demonstrated the effectiveness of the proposed approach over several samples, as reflected by the separability between particles in three usual settings. The results also demonstrated efficacy for both convex and non-convex agglomerates.
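
    The graph construction described above is straightforward to reproduce; the sketch below assumes particle centroids have already been detected (a list of (x, y) positions) and uses networkx, with the radius threshold and example coordinates purely illustrative.

        # Sketch: one vertex per detected particle centroid, an edge whenever two
        # centroids lie within a radius of interest; simple measures then highlight
        # agglomerates.
        import math
        import networkx as nx

        def particle_network(centroids, radius):
            """centroids: list of (x, y) particle positions; radius: edge threshold."""
            g = nx.Graph()
            g.add_nodes_from(range(len(centroids)))
            for i in range(len(centroids)):
                for j in range(i + 1, len(centroids)):
                    if math.dist(centroids[i], centroids[j]) <= radius:
                        g.add_edge(i, j)
            return g

        g = particle_network([(0, 0), (1, 1), (1.5, 0.5), (10, 10)], radius=2.0)
        # Connected components approximate agglomerates; vertex degree flags densely
        # packed particles.
        print(list(nx.connected_components(g)), dict(g.degree()))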

  13. KAMEDIN: a telemedicine system for computer supported cooperative work and remote image analysis in radiology.

    PubMed

    Handels, H; Busch, C; Encarnação, J; Hahn, C; Kühn, V; Miehe, J; Pöppl, S I; Rinast, E; Rossmanith, C; Seibert, F; Will, A

    1997-03-01

    The software system KAMEDIN (Kooperatives Arbeiten und MEdizinische Diagnostik auf Innovativen Netzen) is a multimedia telemedicine system for exchange, cooperative diagnostics, and remote analysis of digital medical image data. It provides components for visualisation, processing, and synchronised audio-visual discussion of medical images. Techniques of computer supported cooperative work (CSCW) synchronise user interactions during a teleconference. Visibility of both the local and remote cursors on the conference workstations facilitates telepointing and reinforces the conference partner's telepresence. Audio communication during teleconferences is supported by an integrated audio component. Furthermore, brain tissue segmentation with artificial neural networks can be performed on an external supercomputer as a remote image analysis procedure. KAMEDIN is designed as a low-cost CSCW tool for ISDN-based telecommunication; however, it can be used on any network supporting TCP/IP. In a field test, KAMEDIN was installed in 15 clinics and medical departments to validate the system's usability. The telemedicine system KAMEDIN has been developed, tested, and evaluated within a research project sponsored by German Telekom.

  14. Cascaded Segmentation-Detection Networks for Word-Level Text Spotting.

    PubMed

    Qin, Siyang; Manduchi, Roberto

    2017-11-01

    We introduce an algorithm for word-level text spotting that is able to accurately and reliably determine the bounding regions of individual words of text "in the wild". Our system is formed by the cascade of two convolutional neural networks. The first network is fully convolutional and is in charge of detecting areas containing text. This results in a very reliable but possibly inaccurate segmentation of the input image. The second network (inspired by the popular YOLO architecture) analyzes each segment produced in the first stage and predicts oriented rectangular regions containing individual words. No post-processing (e.g. text line grouping) is necessary. With an execution time of 450 ms for a 1000 × 560 image on a Titan X GPU, our system achieves good performance on the ICDAR 2013 and 2015 benchmarks [2], [1].

  15. Naval Research Laboratory Fact Book 2012

    DTIC Science & Technology

    2012-11-01

    Distributed network-based battle management; high performance computing supporting uniform and nonuniform memory access with single and multithreaded ... hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power ... hyperspectral imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services Division

  16. The analysis of transient noise of PCB P/G network based on PI/SI co-simulation

    NASA Astrophysics Data System (ADS)

    Haohang, Su

    2018-02-01

    As the operating frequency of the space camera becomes higher than before, power noise in the imaging electronic system becomes an important factor. Excessive power noise disturbs the transmitted signal and can even degrade image sharpness and increase system noise. The "target impedance method" is one of the traditional design methods for the P/G network (power and ground network), but it lacks transient power-noise analysis and often leads to over-design. In this paper, a new design method for the P/G network is provided, based on PI/SI co-simulation. The transient power noise can be simulated and then applied in the design of noise reduction, thus effectively controlling the noise in the P/G network. The method efficiently limits the number of decoupling capacitors that must be added, and is efficient and feasible for maintaining power integrity.
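
    For context, the "target impedance" rule of thumb that the paper contrasts with transient PI/SI co-simulation reduces to a single formula: the P/G network impedance should stay below the supply voltage times the allowed ripple fraction, divided by the worst-case transient current. The numbers below are illustrative only, not taken from the paper.

        # Sketch of the classical target-impedance rule of thumb.
        def target_impedance(v_supply, ripple_fraction, delta_i):
            """Z_target = V_supply * allowed_ripple / worst-case transient current."""
            return v_supply * ripple_fraction / delta_i

        # Example: 1.8 V rail, 5% allowed ripple, 2 A current step
        print(f"{target_impedance(1.8, 0.05, 2.0) * 1000:.1f} mOhm")  # 45.0 mOhm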

  17. Real-time optimizations for integrated smart network camera

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois

    2005-02-01

    We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, highlights alarms, and computes statistics. The system also produces meta-data information that can be shared with other cameras in a network. We describe the requirements of such a system and then show how its design is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking, and event detection must be highly optimized and simplified to run on this hardware. To achieve a good match between hardware and software in this lightweight embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi alliance. We can easily integrate software and hardware in complex environments thanks to the Java Real-Time specification for the virtual machine and several network- and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies for such a camera, such as counter-flow detection.
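
    As a rough illustration of the kind of lightweight background differencing such a camera might run per frame, the sketch below uses OpenCV with purely illustrative threshold, kernel, and area parameters; it is not the camera's actual pipeline.

        # Sketch: frame-vs-background differencing followed by simple blob extraction.
        import cv2

        def detect_motion(frame_gray, background_gray, thresh=25, min_area=200):
            diff = cv2.absdiff(frame_gray, background_gray)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                    cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
            # OpenCV 4.x returns (contours, hierarchy).
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            # Bounding boxes of blobs large enough to be events of interest.
            return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]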

  18. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  19. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  20. Driver drowsiness detection using ANN image processing

    NASA Astrophysics Data System (ADS)

    Vesselenyi, T.; Moca, S.; Rus, A.; Mitran, T.; Tătaru, B.

    2017-10-01

    The paper presents a study regarding the possibility of developing a drowsiness detection system for car drivers based on three types of methods: EEG and EOG signal processing and driver image analysis. In previous works the authors have described the research on the first two methods. In this paper the authors have studied the possibility of detecting the drowsy or alert state of the driver based on images taken during driving and on analyzing the state of the driver's eyes: open, half-open and closed. For this purpose two kinds of artificial neural networks were employed: a one-hidden-layer network and an autoencoder network.

  1. A neural network ActiveX based integrated image processing environment.

    PubMed

    Ciuca, I; Jitaru, E; Alaicescu, M; Moisil, I

    2000-01-01

    The paper outlines an integrated image processing environment that uses neural networks ActiveX technology for object recognition and classification. The image processing environment which is Windows based, encapsulates a Multiple-Document Interface (MDI) and is menu driven. Object (shape) parameter extraction is focused on features that are invariant in terms of translation, rotation and scale transformations. The neural network models that can be incorporated as ActiveX components into the environment allow both clustering and classification of objects from the analysed image. Mapping neural networks perform an input sensitivity analysis on the extracted feature measurements and thus facilitate the removal of irrelevant features and improvements in the degree of generalisation. The program has been used to evaluate the dimensions of the hydrocephalus in a study for calculating the Evans index and the angle of the frontal horns of the ventricular system modifications.

  2. Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke

    2016-03-01

    Deep learning refers to a large set of neural network based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
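
    The inference side of the layer-wise region-narrowing idea can be pictured as a chain of stages, each predicting a tighter bounding region inside the region selected by the previous stage. The sketch below is schematic and hedged: the trained models, the predict_box callable, and the box convention are assumptions, not the authors' code.

        # Schematic of stacked region narrowing: each stage sees the crop chosen by the
        # previous stage and predicts a tighter sub-region containing the organ.
        def cascade_infer(image, stages, predict_box):
            """stages: trained models, coarsest first;
            predict_box(model, crop) -> (x0, y0, x1, y1) in crop coordinates (assumed API)."""
            x0, y0, x1, y1 = 0, 0, image.shape[1], image.shape[0]
            for stage in stages:
                crop = image[y0:y1, x0:x1]
                bx0, by0, bx1, by1 = predict_box(stage, crop)
                # Re-express the predicted box in full-image coordinates for the next stage.
                x0, y0, x1, y1 = x0 + bx0, y0 + by0, x0 + bx1, y0 + by1
            return (x0, y0, x1, y1)  # final region of interest around the organ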

  3. Digital implementation of a neural network for imaging

    NASA Astrophysics Data System (ADS)

    Wood, Richard; McGlashan, Alex; Yatulis, Jay; Mascher, Peter; Bruce, Ian

    2012-10-01

    This paper outlines the design and testing of a digital imaging system that utilizes an artificial neural network with unsupervised and supervised learning to convert streaming input (real time) image space into parameter space. The primary objective of this work is to investigate the effectiveness of using a neural network to significantly reduce the information density of streaming images so that objects can be readily identified by a limited set of primary parameters and act as an enhanced human machine interface (HMI). Many applications are envisioned including use in biomedical imaging, anomaly detection and as an assistive device for the visually impaired. A digital circuit was designed and tested using a Field Programmable Gate Array (FPGA) and an off the shelf digital camera. Our results indicate that the networks can be readily trained when subject to limited sets of objects such as the alphabet. We can also separate limited object sets with rotational and positional invariance. The results also show that limited visual fields form with only local connectivity.

  4. Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.

    PubMed

    Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik

    2015-12-01

    The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very large scale integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using the reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate this threshold using continuous Lyapunov exponents from dynamical system theory. Furthermore, we render the diffusion in the system to be anisotropic, with the degree of anisotropy being set by the gradients of grayscale values in each image. The final step involves a simplification of the model that is achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good if not better than that of the previous methods without increasing the size of the network.
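
    A minimal numerical sketch of the underlying idea, a grid of diffusively coupled FitzHugh-Nagumo units, one per pixel, driven by image intensity, is given below. The parameters, the isotropic Laplacian coupling, and the periodic boundary handling are all illustrative; the paper's model uses anisotropic, gradient-dependent diffusion and a further simplification that removes the coupling terms.

        # Sketch: coupled FitzHugh-Nagumo units on an image grid.
        #   dv/dt = v - v^3/3 - w + I + D * laplacian(v)
        #   dw/dt = eps * (v + a - b * w)
        import numpy as np

        def fhn_grid(image, steps=2000, dt=0.05, a=0.7, b=0.8, eps=0.08, D=0.2):
            v = np.zeros_like(image, dtype=float)
            w = np.zeros_like(image, dtype=float)
            I = image.astype(float) / max(image.max(), 1)  # external drive from pixel intensity
            for _ in range(steps):
                lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                       np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
                v += dt * (v - v ** 3 / 3 - w + I + D * lap)
                w += dt * eps * (v + a - b * w)
            return v  # spatial differences in the settled potential highlight edges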

  5. Video networking of cardiac catheterization laboratories.

    PubMed

    Tobis, J; Aharonian, V; Mansukhani, P; Kasaoka, S; Jhandyala, R; Son, R; Browning, R; Youngblood, L; Thompson, M

    1999-02-01

    The purpose of this study was to assess the feasibility and accuracy of a video telecommunication network to transmit coronary images to provide on-line interaction between personnel in a cardiac catheterization laboratory and a remote core laboratory. A telecommunication system was installed in the cardiac catheterization laboratory at Kaiser Hospital, Los Angeles, and the core laboratory at the University of California, Irvine, approximately 40 miles away. Cineangiograms, live fluoroscopy, intravascular ultrasound studies and images of the catheterization laboratory were transmitted in real time over a dedicated T1 line at 768 kilobits per second at 15 frames/second. These cases were performed during a clinical study of angiographic guidance versus intravascular ultrasound (IVUS) guidance of stent deployment. During the cases the core laboratory performed quantitative analysis of the angiograms and ultrasound images. Selected images were then annotated and transmitted back to the catheterization laboratory to facilitate discussion during the procedure. A successful communication hookup was obtained in 39 (98%) of 40 cases. Measurements of angiographic parameters were very close between the original cinefilm and the transmitted images. Quantitative analysis of the ultrasound images showed no significant difference in any of the diameter or cross-sectional area measurements between the original ultrasound tape and the transmitted images. The telecommunication link during the interventional procedures had a significant impact in 23 (58%) of 40 cases, affecting the area to be treated, the size of the inflation balloon, recognition of stent underdeployment, or the existence of disease in other areas that was not noted on the original studies. Current video telecommunication systems provide high-quality images on-line with accurate representation of cineangiograms and intravascular ultrasound images. This system had a significant impact on 58% of the cases in this small clinical trial. Telecommunication networks between hospitals and a central core laboratory may facilitate physician training and improve technical skills and judgement during interventional procedures. This project has implications for how multicenter clinical trials could be operated through telecommunication networks to ensure conformity with the protocol.

  6. A System for Video Surveillance and Monitoring CMU VSAM Final Report

    DTIC Science & Technology

    1999-11-30

    motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based ... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses ... algorithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single ...

  7. The SSABLE system - Automated archive, catalog, browse and distribution of satellite data in near-real time

    NASA Technical Reports Server (NTRS)

    Simpson, James J.; Harkins, Daniel N.

    1993-01-01

    Historically, locating and browsing satellite data has been a cumbersome and expensive process. This has impeded the efficient and effective use of satellite data in the geosciences. SSABLE is a new interactive tool for the archive, browse, order, and distribution of satellite data based upon X Window, high bandwidth networks, and digital image rendering techniques. SSABLE provides for automatically constructing relational database queries to archived image datasets based on time, date, geographical location, and other selection criteria. SSABLE also provides a visual representation of the selected archived data for viewing on the user's X terminal. SSABLE is a near real-time system; for example, data are added to SSABLE's database within 10 min after capture. SSABLE is network and machine independent; it will run identically on any machine which satisfies the following three requirements: 1) has a bitmapped display (monochrome or greater); 2) is running the X Window system; and 3) is on a network directly reachable by the SSABLE system. SSABLE has been evaluated at over 100 international sites. Network response time in the United States and Canada varies between 4 and 7 s for browse image updates; reported transmission times to Europe and Australia typically are 20-25 s.

  8. Validation of organ procurement and transplant network (OPTN)/united network for organ sharing (UNOS) criteria for imaging diagnosis of hepatocellular carcinoma.

    PubMed

    Fowler, Kathryn J; Karimova, E Jane; Arauz, Anthony R; Saad, Nael E; Brunt, Elizabeth M; Chapman, William C; Heiken, Jay P

    2013-06-27

    Imaging diagnosis of hepatocellular carcinoma (HCC) presents an important pathway for transplant exception points and priority for cirrhotic patients. The purpose of this retrospective study is to evaluate the validity of the new Organ Procurement and Transplant Network (OPTN) classification system on patients undergoing transplantation for HCC. One hundred twenty-nine patients underwent transplantation for HCC from April 14, 2006 to April 18, 2011; a total of 263 lesions were reported as suspicious for HCC on pretransplantation magnetic resonance imaging. Magnetic resonance imaging examinations were reviewed independently by two experienced radiologists, blinded to final pathology. Reviewers identified major imaging features and an OPTN classification was assigned to each lesion. Final proof of diagnosis was pathology on explant or necrosis along with imaging findings of ablation after transarterial chemoembolization. Application of OPTN imaging criteria in our population resulted in high specificity for the diagnosis of HCC. Sensitivity in diagnosis of small lesions (≥1 and <2 cm) was low (range, 26%-34%). Use of the OPTN system would have resulted in different management in 17% of our population who had received automatic exception points for HCC based on preoperative imaging but would not have met criteria under the new system. Eleven percent of the patients not meeting OPTN criteria were found to have T2 stage tumor burden on pathology. The OPTN imaging policy introduces a high level of specificity for HCC but may decrease sensitivity for small lesions. Management may be impacted in a number of patients, potentially requiring longer surveillance periods or biopsy to confirm diagnosis.

  9. Variable disparity estimation based intermediate view reconstruction in dynamic flow allocation over EPON-based access networks

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo

    2008-06-01

    In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) in dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA), and they are transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm, and then 16-view images are synthesized by the IVR using VDE. Some experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by up to 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can guarantee upper limits on transmission delays per flow. The modeling and simulation results from this scheme, including mathematical analyses, are also provided.
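
    As background for the disparity estimation step, the sketch below shows a basic sum-of-absolute-differences block matcher over a horizontal search range; the paper's VBMA uses variable block sizes, so the fixed 8x8 block and the search parameters here are simplifying assumptions.

        # Sketch: fixed-block SAD matching along the horizontal epipolar line.
        import numpy as np

        def disparity_sad(left, right, block=8, max_disp=32):
            h, w = left.shape
            disp = np.zeros((h // block, w // block), dtype=int)
            for by in range(h // block):
                for bx in range(w // block):
                    y, x = by * block, bx * block
                    ref = left[y:y + block, x:x + block].astype(int)
                    best_cost, best_d = None, 0
                    for d in range(min(max_disp, x) + 1):
                        cand = right[y:y + block, x - d:x - d + block].astype(int)
                        cost = np.abs(ref - cand).sum()
                        if best_cost is None or cost < best_cost:
                            best_cost, best_d = cost, d
                    disp[by, bx] = best_d
            return disp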

  10. Tissue classification and segmentation of pressure injuries using convolutional neural networks.

    PubMed

    Zahia, Sofia; Sierra-Sosa, Daniel; Garcia-Zapirain, Begonya; Elmaghraby, Adel

    2018-06-01

    This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin damage that needs frequent diagnosis and treatment. Therefore, reliable and accurate systems for segmentation and tissue type identification are needed in order to achieve better treatment results. Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissues). A preprocessing step removes flash-light artifacts and creates a set of 5x5 sub-images which are used as input for the CNN. The network output classifies every sub-image of the validation set into one of the three classes studied. The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average precision per class of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. Our system has been proven to make recognition of complicated structures in biomedical images feasible. Copyright © 2018 Elsevier B.V. All rights reserved.
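
    To make the patch-classification setup concrete, the sketch below builds a small network over 5x5 RGB sub-images with three outputs (granulation, slough, necrotic). The layer sizes and training settings are illustrative assumptions, not the authors' architecture.

        # Sketch: tiny CNN classifying 5x5 sub-images into three tissue classes.
        import tensorflow as tf

        def build_patch_classifier():
            return tf.keras.Sequential([
                tf.keras.layers.Input(shape=(5, 5, 3)),
                tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
                tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
                tf.keras.layers.Flatten(),
                tf.keras.layers.Dense(64, activation="relu"),
                tf.keras.layers.Dense(3, activation="softmax"),  # one probability per tissue type
            ])

        model = build_patch_classifier()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(patches, labels, ...) with 5x5 sub-images cut from preprocessed photos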

  11. Beyond the word and image: characteristics of a common meaning system for language and vision revealed by functional and structural imaging.

    PubMed

    Jouen, A L; Ellmore, T M; Madden, C J; Pallier, C; Dominey, P F; Ventre-Dominey, J

    2015-02-01

    This research tests the hypothesis that comprehension of human events will engage an extended semantic representation system, independent of the input modality (sentence vs. picture). To investigate this, we examined brain activation and connectivity in 19 subjects who read sentences and viewed pictures depicting everyday events, in a combined fMRI and DTI study. Conjunction of activity in understanding sentences and pictures revealed a common fronto-temporo-parietal network that included the middle and inferior frontal gyri, the parahippocampal-retrosplenial complex, the anterior and middle temporal gyri, and the inferior parietal lobe, in particular the temporo-parietal cortex. DTI tractography seeded from this temporo-parietal cortex hub revealed a multi-component network reaching into the temporal pole, the ventral frontal pole and premotor cortex. A significant correlation was found between the relative pathway density issued from the temporo-parietal cortex and the imageability of sentences for individual subjects, suggesting a potential functional link between comprehension and the temporo-parietal connectivity strength. These data help to define a "meaning" network that includes components of recently characterized systems for semantic memory, embodied simulation, and visuo-spatial scene representation. The network substantially overlaps with the "default mode" network implicated as part of a core network of semantic representation, along with brain systems related to the formation of mental models and reasoning. These data are consistent with a model of real-world situational understanding that is highly embodied. Crucially, the neural basis of this embodied understanding is not limited to sensorimotor systems, but extends to the highest levels of cognition, including autobiographical memory, scene analysis, mental model formation, reasoning and theory of mind. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Analysis of structure-function network decoupling in the brain systems of spastic diplegic cerebral palsy.

    PubMed

    Lee, Dongha; Pae, Chongwon; Lee, Jong Doo; Park, Eun Sook; Cho, Sung-Rae; Um, Min-Hee; Lee, Seung-Koo; Oh, Maeng-Keun; Park, Hae-Jeong

    2017-10-01

    Understanding how functionality emerges from the structural brain network is becoming increasingly important for understanding brain disease. With the aim of investigating the differential structure-function couplings according to network systems, we investigated the structural and functional brain networks of patients with spastic diplegic cerebral palsy with periventricular leukomalacia compared to healthy controls. The structural and functional networks of the whole brain and motor system, constructed using deterministic and probabilistic tractography of diffusion tensor magnetic resonance images and Pearson and partial correlation analyses of resting-state functional magnetic resonance images, showed differential embedding of functional networks in the structural networks in patients. In the whole-brain network of patients, significantly reduced global network efficiency compared to healthy controls was found in the structural networks but not in the functional networks, resulting in reduced structural-functional coupling. On the contrary, the motor network of patients had a significantly lower functional network efficiency over the intact structural network and a lower structure-function coupling than the control group. This reduced coupling, but with reversed directionality, in the whole-brain and motor networks of patients was particularly prominent between the probabilistic structural and partial correlation-based functional networks. An intact (or less deficient) functional network over impaired structural networks of the whole brain, and a highly impaired functional network topology over the intact structural motor network, might subserve the relatively preserved cognition and impaired motor function in cerebral palsy. This study suggests that the structure-function relationship, evaluated specifically using sparse functional connectivity, may reveal important clues to functional reorganization in cerebral palsy. Hum Brain Mapp 38:5292-5306, 2017. © 2017 Wiley Periodicals, Inc.

  13. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveals that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
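
    The row-key idea described above can be illustrated with a simple key-composition helper: fields are ordered project, subject, session, scan, slice and padded so that lexicographic (byte) ordering keeps hierarchically related data adjacent, which is what makes prefix scans and collocation possible. Field names and widths here are assumptions, not the paper's exact scheme.

        # Sketch: hierarchical HBase row key so related imaging data share a prefix.
        def imaging_row_key(project, subject, session, scan, slice_idx):
            return "{}|{}|{}|{:04d}|{:05d}".format(project, subject, session, scan, slice_idx)

        # All slices of one scan (and all scans of one session, etc.) share a key
        # prefix, so a prefix scan retrieves them together from co-located regions.
        print(imaging_row_key("ProjA", "Subj001", "Sess01", 3, 42))
        # -> ProjA|Subj001|Sess01|0003|00042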

  14. Potential fault region detection in TFDS images based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Sun, Junhua; Xiao, Zhongwen

    2016-10-01

    In recent years, more than 300 sets of the Trouble of Running Freight Train Detection System (TFDS) have been installed on railways to monitor the safety of running freight trains in China. However, TFDS is simply responsible for capturing, transmitting, and storing images, and fails to recognize faults automatically due to difficulties such as the diversity and complexity of faults and some low-quality images. To improve the performance of automatic fault recognition, it is of great importance to locate the potential fault areas. In this paper, we first introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that this system achieves higher performance in detecting PFRs in TFDS images. An average detection recall of 98.95% and precision of 100% are obtained, demonstrating the high detection ability and robustness against various poor imaging situations.

  15. Feasibility of telemammography as biomedical application for breast imaging

    NASA Astrophysics Data System (ADS)

    Beckerman, Barbara G.; Batsell, Stephen G.; MacIntyre, Lawrence P.; Sarraf, Hamed S.; Gleason, Shaun S.; Schnall, Mitchell D.

    1999-07-01

    Mammographic screening is an important tool in the early detection of breast cancer. The migration of mammography from the current mode of x-ray mammography using a film screen image detector and display to a digital technology provides an opportunity to improve access and performance of breast cancer screening. The sheer size and volume of the typical screening exam, the need to have previous screening data readily available, and the need to view other breast imaging data together to provide a common consensus and to plan treatment, make telemammography an ideal application for breast imaging. For telemammography to be a viable option, it must overcome the technical challenges related to transmission, archiving, management, processing and retrieval of large data sets. Researchers from the University of Pennsylvania, the University of Chicago and Lockheed Martin Energy Systems/Oak Ridge National Laboratory have developed a framework for transmission of large-scale medical images over high-speed networks, leveraged existing high-speed networks between research and medical facilities; tested the feasibility of point-to-point transmission of mammographic images in a near-real time environment; evaluated network performance and transmission scenarios; and investigated the impact of image preprocessing on an experimental computer-aided diagnosis system. Results of the initial study are reported here.

  16. Digital image classification with the help of artificial neural network by simple histogram.

    PubMed

    Dey, Pranab; Banerjee, Nirmalya; Kaur, Rajwant

    2016-01-01

    Visual image classification is a great challenge to the cytopathologist in routine day-to-day work. An artificial neural network (ANN) may be helpful in this matter. In this study, we have tried to classify digital images of malignant and benign cells in effusion cytology smears with the help of simple histogram data and an ANN. A total of 404 digital images consisting of 168 benign cells and 236 malignant cells were selected for this study. The simple histogram data was extracted from these digital images and an ANN was constructed with the help of Neurointelligence software [Alyuda Neurointelligence 2.2 (577), Cupertino, California, USA]. The network architecture was 6-3-1. The images were divided into a training set (281), a validation set (63), and a test set (60). The on-line backpropagation training algorithm was used for this study. A total of 10,000 iterations were done to train the ANN system at a speed of 609.81 iterations per second. After adequate training of this ANN model, the system was able to identify all 34 malignant cell images and 24 out of 26 benign cells. The ANN model can be used for the identification of individual malignant cells with the help of simple histogram data. This study will be helpful in the future to identify malignant cells in unknown situations.
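
    A hedged sketch of the 6-3-1 setup: six histogram-derived inputs per cell image, one hidden layer of three units, and a single benign/malignant output. scikit-learn's MLPClassifier stands in for the Neurointelligence model, and the six-bin normalized histogram is an illustrative choice of input features rather than the study's exact ones.

        # Sketch: 6-3-1 style classifier on simple histogram data.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def histogram_features(gray_image, bins=6):
            """Six inputs: the normalized 6-bin intensity histogram (illustrative)."""
            hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
            return hist / max(hist.sum(), 1)

        clf = MLPClassifier(hidden_layer_sizes=(3,), max_iter=10000)
        # X: one 6-value feature row per cell image; y: 0 = benign, 1 = malignant
        # clf.fit(X_train, y_train); clf.predict(X_test)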

  17. Integrated clinical workstations for image and text data capture, display, and teleconsultation.

    PubMed

    Dayhoff, R; Kuzmak, P M; Kirin, G

    1994-01-01

    The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway.

  18. Biomorphic networks: approach to invariant feature extraction and segmentation for ATR

    NASA Astrophysics Data System (ADS)

    Baek, Andrew; Farhat, Nabil H.

    1998-10-01

    Invariant features in two dimensional binary images are extracted in a single layer network of locally coupled spiking (pulsating) model neurons with prescribed synapto-dendritic response. The feature vector for an image is represented as invariant structure in the aggregate histogram of interspike intervals, obtained by computing the time intervals between successive spikes produced by each neuron over a given period and combining such intervals from all neurons in the network into a histogram. Simulation results show that the feature vectors are more pattern-specific and invariant under translation, rotation, and change in scale or intensity than achieved in earlier work. We also describe an application of such networks to segmentation of line (edge-enhanced or silhouette) images. The biomorphic spiking network's capabilities in segmentation and invariant feature extraction may, when combined, prove valuable in Automated Target Recognition (ATR) and other automated object recognition systems.

  19. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  20. Hyperspectral imaging utility for transportation systems

    NASA Astrophysics Data System (ADS)

    Bridgelall, Raj; Rafert, J. Bruce; Tolliver, Denver

    2015-03-01

    The global transportation system is massive, open, and dynamic. Existing performance and condition assessments of the complex interacting networks of roadways, bridges, railroads, pipelines, waterways, airways, and intermodal ports are expensive. Hyperspectral imaging is an emerging remote sensing technique for the non-destructive evaluation of multimodal transportation infrastructure. Unlike panchromatic, color, and infrared imaging, each layer of a hyperspectral image pixel records reflectance intensity from one of dozens or hundreds of relatively narrow wavelength bands that span a broad range of the electromagnetic spectrum. Hence, every pixel of a hyperspectral scene provides a unique spectral signature that offers new opportunities for informed decision-making in transportation systems development, operations, and maintenance. Spaceborne systems capture images of vast areas in a short period but provide lower spatial resolution than airborne systems. Practitioners use manned aircraft to achieve higher spatial and spectral resolution, but at the price of custom missions and narrow focus. The rapid size and cost reduction of unmanned aircraft systems promise a third alternative that offers hybrid benefits at affordable prices by conducting multiple parallel missions. This research formulates a theoretical framework for a pushbroom type of hyperspectral imaging system on each type of data acquisition platform. The study then applies the framework to assess the relative potential utility of hyperspectral imaging for previously proposed remote sensing applications in transportation. The authors also introduce and suggest new potential applications of hyperspectral imaging in transportation asset management, network performance evaluation, and risk assessments to enable effective and objective decision- and policy-making.

  1. Sustained synchronized neuronal network activity in a human astrocyte co-culture system

    PubMed Central

    Kuijlaars, Jacobine; Oyelami, Tutu; Diels, Annick; Rohrbacher, Jutta; Versweyveld, Sofie; Meneghello, Giulia; Tuefferd, Marianne; Verstraelen, Peter; Detrez, Jan R.; Verschuuren, Marlies; De Vos, Winnok H.; Meert, Theo; Peeters, Pieter J.; Cik, Miroslav; Nuydens, Rony; Brône, Bert; Verheyen, An

    2016-01-01

    Impaired neuronal network function is a hallmark of neurodevelopmental and neurodegenerative disorders such as autism, schizophrenia, and Alzheimer’s disease and is typically studied using genetically modified cellular and animal models. Weak predictive capacity and poor translational value of these models urge for better human derived in vitro models. The implementation of human induced pluripotent stem cells (hiPSCs) allows studying pathologies in differentiated disease-relevant and patient-derived neuronal cells. However, the differentiation process and growth conditions of hiPSC-derived neurons are non-trivial. In order to study neuronal network formation and (mal)function in a fully humanized system, we have established an in vitro co-culture model of hiPSC-derived cortical neurons and human primary astrocytes that recapitulates neuronal network synchronization and connectivity within three to four weeks after final plating. Live cell calcium imaging, electrophysiology and high content image analyses revealed an increased maturation of network functionality and synchronicity over time for co-cultures compared to neuronal monocultures. The cells express GABAergic and glutamatergic markers and respond to inhibitors of both neurotransmitter pathways in a functional assay. The combination of this co-culture model with quantitative imaging of network morphofunction is amenable to high throughput screening for lead discovery and drug optimization for neurological diseases. PMID:27819315

  2. Detection, location, and quantification of structural damage by neural-net-processed moiré profilometry

    NASA Astrophysics Data System (ADS)

    Grossman, Barry G.; Gonzalez, Frank S.; Blatt, Joel H.; Hooker, Jeffery A.

    1992-03-01

    The development of efficient high speed techniques to recognize, locate, and quantify damage is vitally important for successful automated inspection systems such as those used for the inspection of undersea pipelines. Two critical problems must be solved to achieve these goals: the reduction of nonuseful information present in the video image and the automatic recognition and quantification of the extent and location of damage. Artificial neural network processed moiré profilometry appears to be a promising technique to accomplish this. Real time video moiré techniques have been developed which clearly distinguish damaged and undamaged areas on structures, thus reducing the amount of extraneous information input into an inspection system. Artificial neural networks have demonstrated advantages for image processing, since they can learn the desired response to a given input and are inherently fast when implemented in hardware due to their parallel computing architecture. Video moiré images of pipes with dents of different depths were used to train a neural network, with the desired output being the location and severity of the damage. The system was then successfully tested with a second series of moiré images. The techniques employed and the results obtained are discussed.

  3. ACOT Classroom Networks: Today and Tomorrow. ACOT Report #5.

    ERIC Educational Resources Information Center

    Knapp, Linda

    The Apple Classrooms of Tomorrow (ACOT) research project provides classroom sites with equipment, ongoing support, and training, enabling educators to discover the potential of networked learning environments. ACOT networks link together technology from Apple IIe computers and Image Writer printers, to Macintosh II systems, synthesizers, laserdisc…

  4. How We Get Pictures from Space. NASA Facts (Revised Edition).

    ERIC Educational Resources Information Center

    Haynes, Robert

    This booklet discusses image processing from spacecraft in deep space. The camera system on board the spacecraft, the Deep Space Network (DSN), and the image processing system are described. A table listing photographs taken by unmanned spacecraft from 1959-1977 is provided. (YP)

  5. MILCOM '85 - Military Communications Conference, Boston, MA, October 20-23, 1985, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference on the development status of communications systems in the context of electronic warfare gives attention to topics in spread spectrum code acquisition, digital speech technology, fiber-optics communications, free space optical communications, the networking of HF systems, and applications and evaluation methods for digital speech. Also treated are issues in local area network system design, coding techniques and applications, technology applications for HF systems, receiver technologies, software development status, channel simulation/prediction methods, C3 networking, spread spectrum networks, the improvement of communication efficiency and reliability through technical control methods, mobile radio systems, and adaptive antenna arrays. Finally, communications system cost analyses, spread spectrum performance, voice and image coding, switched networks, and microwave GaAs ICs are considered.

  6. Semantic Segmentation and Unregistered Building Detection from Uav Images Using a Deconvolutional Network

    NASA Astrophysics Data System (ADS)

    Ham, S.; Oh, Y.; Choi, K.; Lee, I.

    2018-05-01

    Detecting unregistered buildings from aerial images is an important task for urban management, such as inspecting illegal buildings in green-belt areas or updating GIS databases. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is very costly and time-consuming to detect unregistered buildings from UAV images since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system which automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map, and compare the building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks such as monitoring illegal buildings or illegal land-use change.

  7. Challenges in sending large radiology images over military communications channels

    NASA Astrophysics Data System (ADS)

    Cleary, Kevin R.; Levine, Betty A.; Norton, Gary S.; Mundur, Padmavathi V.

    1997-05-01

    In cooperation with the US Army, Georgetown University Medical Center (GUMC) deployed a teleradiology network to sites in Bosnia-Herzegovina, Hungary, and Germany in early 1996. This deployment was part of Operation Primetime III, a military project to provide state-of-the-art medical care to the 20,000 US troops stationed in Bosnia-Herzegovina. In a three-month time frame from January to April 1996, the Imaging Sciences and Information Systems (ISIS) Center at GUMC worked with the Army to design, develop, and deploy a teleradiology network for the digital storage and transmission of radiology images. This paper discusses some of the problems associated with sending large files over communications networks with significant delays, such as those introduced by satellite transmissions. Radiology images of up to 10 megabytes are acquired, stored, and transmitted over the wide area network (WAN). The WAN included leased lines from Germany to Hungary and a satellite link from Germany to Bosnia-Herzegovina. The communications links provided at least a T-1 bandwidth. The satellite link introduces a round-trip delay of approximately 500 milliseconds. This type of high bandwidth, high delay network is called a long fat network. The images are transferred across this network using the Transmission Control Protocol/Internet Protocol (TCP/IP). By modifying the TCP/IP software to increase the window size, the throughput of the satellite link can be greatly improved.
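
    The bandwidth-delay product explains why the default TCP window was the bottleneck: at T-1 rates (about 1.544 Mbit/s) with a 500 ms round-trip delay, roughly 1.544e6 x 0.5 / 8, or about 96 KB, can be "in flight", more than the classic 64 KB TCP window allows. The sketch below is only illustrative and is not the software modification used in the project; it shows how a sender can request larger socket buffers in Python so the operating system can keep a long fat pipe full.

      import socket

      BANDWIDTH_BPS = 1_544_000    # T-1 line rate, used here only for illustration
      RTT_SECONDS = 0.5            # satellite round-trip delay reported in the paper
      BDP_BYTES = int(BANDWIDTH_BPS * RTT_SECONDS / 8)   # ~96 KB bandwidth-delay product

      def send_large_image(host, port, image_bytes):
          """Send one large radiology image over a long fat network.
          Requesting buffers of several BDPs lets TCP keep enough data in flight."""
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          # The kernel may clamp these requests to its configured limits.
          s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * BDP_BYTES)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * BDP_BYTES)
          s.connect((host, port))
          s.sendall(image_bytes)
          s.close()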

  8. Artificial neural network-aided image analysis system for cell counting.

    PubMed

    Sjöström, P J; Frydel, B R; Wahlberg, L U

    1999-05-01

    In histological preparations containing debris and synthetic materials, it is difficult to automate cell counting using standard image analysis tools, i.e., systems that rely on boundary contours, histogram thresholding, etc. In an attempt to mimic manual cell recognition, an automated cell counter was constructed using a combination of artificial intelligence and standard image analysis methods. Artificial neural network (ANN) methods were applied to digitized microscopy fields without pre-ANN feature extraction. A three-layer feed-forward network with extensive weight sharing in the first hidden layer was employed and trained on 1,830 examples using the error back-propagation algorithm on a Power Macintosh 7300/180 desktop computer. The optimal number of hidden neurons was determined and the trained system was validated by comparison with blinded human counts. System performance at 50x and 100x magnification was evaluated. The correlation index at 100x magnification neared person-to-person variability, while 50x magnification was not useful. The system was approximately six times faster than an experienced human. ANN-based automated cell counting in noisy histological preparations is feasible. Consistent histology and computer power are crucial for system performance. The system provides several benefits, such as speed of analysis and consistency, and frees up personnel for other tasks.

  9. Flame analysis using image processing techniques

    NASA Astrophysics Data System (ADS)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques that use fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests were carried out on a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect flame stability. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to determine flame stability automatically. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
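
    As a rough illustration of the PSD step described above (not the authors' exact processing chain), the sketch below estimates a power spectral density from a time series of mean flame luminosity extracted from successive infrared frames; the frame rate and the frame-to-luminosity reduction are assumptions.

      import numpy as np

      def mean_luminosity(frames):
          """Reduce a stack of gray-scale flame images (n_frames, h, w) to a 1-D signal."""
          return frames.reshape(frames.shape[0], -1).mean(axis=1)

      def power_spectral_density(signal, fs):
          """Simple windowed periodogram: squared FFT magnitude of the detrended signal."""
          x = signal - signal.mean()
          spectrum = np.fft.rfft(x * np.hanning(len(x)))
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          psd = (np.abs(spectrum) ** 2) / (fs * len(x))
          return freqs, psd

      # Example with an assumed 100 Hz frame rate; the dominant PSD peak and its
      # spread could then feed a fuzzy inference system as stability indicators.
      # freqs, psd = power_spectral_density(mean_luminosity(frames), fs=100.0)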

  10. Unified Lunar Control Network 2005 and Topographic Model

    NASA Technical Reports Server (NTRS)

    Archinal, B. A.; Rosiek, M. R.; Redding, B. L.

    2005-01-01

    There are currently two generally accepted lunar control networks. These are the Unified Lunar Control Network (ULCN) and the Clementine Lunar Control Network (CLCN), both derived by M. Davies and T. Colvin at RAND. We address here our efforts to merge and improve these networks into a new ULCN. The ULCN was described in the last major publication about a lunar control network, along with statistics on this and the other networks discussed here. Images for this network come from the Apollo, Mariner 10, and Galileo missions, and from Earth-based photographs. The importance of this network is that its accuracy is relatively well quantified and published information on the network is available. The CLCN includes measurements on 43,871 Clementine 750-nm images - the largest planetary control network ever computed. The purpose of this network was to determine the geometry for the Clementine Basemap Mosaic (CBM). The geometry of that mosaic was used to produce the Clementine UVVIS digital image model and the Near-Infrared Global Multispectral Map of the Moon from Clementine. Through the extensive use of these products, they and the underlying CLCN in effect define the generally accepted current coordinate system for reporting and describing lunar positions. However, no publication describes the CLCN itself.

  11. Intelligent distributed medical image management

    NASA Astrophysics Data System (ADS)

    Garcia, Hong-Mei C.; Yun, David Y.

    1995-05-01

    The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute- based and content-based retrievals.

  12. Social computing for image matching

    PubMed Central

    Rivas, Alberto; Sánchez-Torres, Ramiro; Rodríguez, Sara

    2018-01-01

    One of the main technological trends of the last five years is mass data analysis. This trend is due in part to the emergence of concepts such as social networks, which generate a large volume of data that can provide added value through analysis. This article is focused on a business- and employment-oriented social network. More specifically, it focuses on the analysis of information provided by different users in image form. The images are analyzed to detect whether other existing users have posted or talked about the same image, even if the image has undergone some type of modification such as watermarks or color filters. This makes it possible to establish new connections among unknown users by detecting what they are posting or whether they are talking about the same images. The proposed solution consists of an image matching algorithm based on the rapid calculation and comparison of hashes, together with a computationally expensive step responsible for reverting possible image transformations. As a result, the image matching process is supported by a distributed forecasting system that enables or disables nodes to serve all the possible requests. The proposed system has shown promising results for matching modified images, especially when compared with other existing systems. PMID:29813082
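
    The hashing idea can be illustrated with a minimal average-hash comparison (a generic perceptual hash, not necessarily the algorithm used in the paper): two images whose hashes differ in only a few bits are treated as candidate matches even after mild modifications such as recompression or a light color filter.

      import numpy as np
      from PIL import Image

      def average_hash(path, hash_size=8):
          """64-bit perceptual hash: downscale, convert to gray scale, threshold at the mean."""
          img = Image.open(path).convert("L").resize((hash_size, hash_size))
          pixels = np.asarray(img, dtype=np.float32)
          return (pixels > pixels.mean()).flatten()

      def hamming_distance(h1, h2):
          """Number of differing hash bits."""
          return int(np.count_nonzero(h1 != h2))

      def probably_same_image(path_a, path_b, threshold=10):
          """Treat two images as a match when few hash bits differ (threshold is an assumption)."""
          return hamming_distance(average_hash(path_a), average_hash(path_b)) <= threshold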

  13. Convolutional neural networks for transient candidate vetting in large-scale surveys

    NASA Astrophysics Data System (ADS)

    Gieseke, Fabian; Bloemen, Steven; van den Bogaard, Cas; Heskes, Tom; Kindler, Jonas; Scalzo, Richard A.; Ribeiro, Valério A. R. M.; van Roestel, Jan; Groot, Paul J.; Yuan, Fang; Möller, Anais; Tucker, Brad E.

    2017-12-01

    Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional pre-processing steps - eventually yielding models outperforming state-of-the-art systems. In particular, our best model correctly classifies about 97.3 per cent of all 'real' and 99.7 per cent of all 'bogus' instances on a test set containing 1942 'bogus' and 227 'real' instances in total. Furthermore, the networks considered in this work can also successfully classify these objects directly without relying on difference images, which might pave the way for future detection pipelines not containing image subtraction steps at all.
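
    A minimal convolutional real/bogus classifier of the kind described could look like the following sketch; the layer sizes, the 64x64 stamp size, and the training details are assumptions for illustration, not the architecture reported in the paper.

      import torch
      import torch.nn as nn

      class RealBogusCNN(nn.Module):
          """Small CNN mapping a single-band 64x64 detection stamp to a 'real' probability;
          artefacts ('bogus') should score near zero."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                  nn.Linear(128, 1),
              )

          def forward(self, x):
              # x: (batch, 1, 64, 64) reference-subtracted stamps
              return torch.sigmoid(self.classifier(self.features(x)))

      # Training would minimise binary cross-entropy on labelled stamps, e.g.:
      # loss = nn.functional.binary_cross_entropy(model(batch), labels)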

  14. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.

  15. Multisite two-photon imaging of neurons on multielectrode arrays

    NASA Astrophysics Data System (ADS)

    Potter, Steve M.; Lukina, Natalia; Longmuir, Kenneth J.; Wu, Yan

    2001-04-01

    We wish to understand how neural systems store, recall, and process information. We are using cultured networks of cortical neurons grown on microelectrode arrays as a model system for studying the emergent properties of ensembles of living neurons. We have developed a 2-way communication interface between the cultured network and a computer-generated animal, the Neurally Controlled Animat. Neural activity is used to control the behavior of the Animat, and 2-photon time-lapse imaging is carried out in order to observe the morphological changes that might underlie changes in neural processing. The 2-photon microscope is ideal for repeated imaging over hours or days, with submicron resolution and little photodamage. We have designed a computer-controlled microscope stage that allows imaging several locations in sequence, in order to collect more image data. For the latest progress, see: http://www.caltech.edu/~pinelab/PotterGroup.htm.

  16. ID card number detection algorithm based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a convolutional neural network is presented to enable fast and convenient extraction of ID information in multiple scenarios. The algorithm runs on a mobile device with the Android operating system to locate and extract the ID number: the characteristic color distribution of the ID card is used to select an appropriate channel component; threshold segmentation, noise suppression, and morphological processing are applied to binarize the image; image rotation combined with a projection method provides horizontal correction when the image is tilted; finally, single characters are extracted by the projection method and recognized with a convolutional neural network. Tests show that a single ID number image takes about 80 ms from extraction to recognition, with an accuracy of about 99%, so the method can be applied in real production and everyday environments.
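
    The projection step can be sketched as follows; this is a simplified illustration assuming a clean binarized number strip (the full pipeline above also includes channel selection, denoising, and rotation correction). Columns whose vertical projection falls to zero separate adjacent digits.

      import numpy as np

      def segment_characters(binary_strip, min_width=3):
          """Split a binarized ID-number strip (2-D array, foreground = 1) into
          per-character sub-images using its vertical projection profile."""
          projection = binary_strip.sum(axis=0)        # amount of ink per column
          in_char, start, boxes = False, 0, []
          for col, value in enumerate(projection):
              if value > 0 and not in_char:            # entering a character
                  in_char, start = True, col
              elif value == 0 and in_char:             # leaving a character
                  in_char = False
                  if col - start >= min_width:         # ignore speckle-narrow runs
                      boxes.append((start, col))
          if in_char and binary_strip.shape[1] - start >= min_width:
              boxes.append((start, binary_strip.shape[1]))
          return [binary_strip[:, a:b] for a, b in boxes]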

  17. Medical image informatics infrastructure design and applications.

    PubMed

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

    Picture archiving and communication systems (PACS) is a system integration of multimodality images and health information systems designed for improving the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.

  18. Pneumothorax detection in chest radiographs using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Aviel; Konen, Eli; Greenspan, Hayit

    2018-02-01

    This study presents a computer assisted diagnosis system for the detection of pneumothorax (PTX) in chest radiographs based on a convolutional neural network (CNN) for pixel classification. Using a pixel classification approach allows utilization of the texture information in the local environment of each pixel while training a CNN model on millions of training patches extracted from a relatively small dataset. The proposed system uses a pre-processing step of lung field segmentation to overcome the large variability in the input images coming from a variety of imaging sources and protocols. Using the CNN classification, suspected pixel candidates are extracted within each lung segment. A post-processing step follows to remove non-physiological suspected regions and noisy connected components. The overall percentage of suspected PTX area was used as a robust global decision for the presence of PTX in each lung. The system was trained on a set of 117 chest x-ray images with ground truth segmentations of the PTX regions. The system was tested on a set of 86 images and reached a diagnostic performance of AUC = 0.95. Overall, the preliminary results are promising and indicate the growing ability of CAD-based systems to detect findings in medical imaging with clinical-level accuracy.

  19. Merlot Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahern, S D

    2003-06-10

    We describe Merlot, a system for delivery of digital imagery over high speed networks. We describe various use cases, the client/server interaction, and the image and network codecs. We also describe some possible applications using Merlot and future work.

  20. [Development and evaluation of the medical imaging distribution system with dynamic web application and clustering technology].

    PubMed

    Yokohama, Noriya; Tsuchimoto, Tadashi; Oishi, Masamichi; Itou, Katsuya

    2007-01-20

    It has been noted that the downtime of medical informatics systems is often long. Many systems encounter downtimes of hours or even days, which can have a critical effect on daily operations. Such systems remain especially weak in the areas of database and medical imaging data. The scheme design shows the three-layer architecture of the system: application, database, and storage layers. The application layer uses the DICOM protocol (Digital Imaging and Communications in Medicine) and HTTP (Hypertext Transfer Protocol) with AJAX (Asynchronous JavaScript+XML). The database is designed to be decentralized in parallel using cluster technology. Consequently, the database can be restored not only with ease but also with improved retrieval speed. In the storage layer, a network RAID (Redundant Array of Independent Disks) system makes it possible to construct exabyte-scale parallel file systems that exploit distributed storage. Development and evaluation of the test bed have been successful for medical information data backup and recovery in a network environment. This paper presents a schematic design of the new medical informatics system, covering recovery and the dynamic Web application for medical imaging distribution using AJAX.

  1. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a data base of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for the image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.
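
    The text-field handling described above (downsampling to 250 pixels/in. and 2-bit gray scale) can be illustrated with the following sketch; it assumes an 8-bit, 500 ppi input array and uses simple 2x2 block averaging rather than whatever resampling the LS-FICS actually employed.

      import numpy as np

      def downsample_500_to_250ppi(field):
          """Halve the resolution by averaging non-overlapping 2x2 blocks (500 -> 250 ppi)."""
          h, w = field.shape[0] // 2 * 2, field.shape[1] // 2 * 2
          blocks = field[:h, :w].reshape(h // 2, 2, w // 2, 2).astype(np.float32)
          return blocks.mean(axis=(1, 3))

      def quantize_to_2bit(field):
          """Map 8-bit gray levels (0-255) onto four levels (0-3) ahead of lossless coding."""
          return np.clip(field // 64, 0, 3).astype(np.uint8)

      # text_2bit = quantize_to_2bit(downsample_500_to_250ppi(text_field_8bit))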

  2. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    PubMed

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field image poses significant difficulties in exploiting its advantage. To mitigate the dependency of accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, which also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light field applications such as depth estimation.

  3. Neural networks application to divergence-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The purpose of this report is to summarize the state of knowledge and outline the planned work in the divergence-based/neural network approach to the problem of passive ranging derived from optical flow. Work in this and closely related areas is reviewed in order to provide the necessary background for further developments. New ideas about devising a monocular passive-ranging system are then introduced. It is shown that image-plane divergence is independent of image-plane location with respect to the focus of expansion and of camera maneuvers because it directly measures the object's expansion which, in turn, is related to the time-to-collision. Thus, a divergence-based method has the potential of providing a reliable range complementing other monocular passive-ranging methods which encounter difficulties in image areas close to the focus of expansion. Image-plane divergence can be thought of as a spatial/temporal pattern. A neural network realization was chosen for this task because neural networks have generally performed well in various other pattern recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.

  4. Computer-aided diagnosis workstation and database system for chest diagnosis based on multi-helical CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru; Sasagawa, Michizou

    2006-03-01

    Multi-helical CT scanners have remarkably increased the speed at which chest CT images can be acquired for mass screening. Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype internet system for community health care in two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems for the safety of medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system. The results of this study indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy, and the safety of medical information.

  5. Optimization of Close Range Photogrammetry Network Design Applying Fuzzy Computation

    NASA Astrophysics Data System (ADS)

    Aminia, A. S.

    2017-09-01

    Measuring object 3D coordinates with optimum accuracy is one of the most important issues in close range photogrammetry. In this context, network design plays an important role in determining the optimum positions of imaging stations. This is, however, not a trivial task due to various geometric and radiometric constraints affecting the quality of the measurement network. As a result, most camera stations in the network are defined on a trial-and-error basis drawing on the user's experience and generic network concepts. In this paper, we propose a post-processing task to investigate the quality of camera positions right after image capture to achieve the best result. To do this, a new fuzzy reasoning approach is adopted, in which the constraints affecting the network design are all modeled. As a result, the positions of all camera locations are evaluated based on fuzzy rules and inappropriate stations are identified. The experiments carried out show that after determination and elimination of the inappropriate images using the proposed fuzzy reasoning system, the accuracy of the measurements is improved by about 17% for the resulting network.

  6. Advanced visualization platform for surgical operating room coordination: distributed video board system.

    PubMed

    Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas

    2006-06-01

    One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.

  7. [The application and development of artificial intelligence in medical diagnosis systems].

    PubMed

    Chen, Zhencheng; Jiang, Yong; Xu, Mingyu; Wang, Hongyan; Jiang, Dazong

    2002-09-01

    This paper reviews the development of artificial intelligence in medical practice and medical diagnostic expert systems, and summarizes the application of artificial neural networks. It explains that one source of difficulty in medical diagnostic systems is the co-existence of multiple, potentially inter-related diseases. The difficulty of image expert systems, by contrast, is inherent in high-level vision, which increases the complexity of expert systems for medical images. Finally, prospects for the development of artificial intelligence in medical image expert systems are discussed.

  8. Computer-aided diagnostic system for diffuse liver diseases with ultrasonography by neural networks

    NASA Astrophysics Data System (ADS)

    Ogawa, K.; Fukushima, M.; Kubota, K.; Hisa, N.

    1998-12-01

    The aim of the study is to establish a computer-aided diagnostic system for diffuse liver diseases such as chronic active hepatitis (CAH) and liver cirrhosis (LC). The authors introduced an artificial neural network for the classification of these diseases. In this system the neural network was trained with feature parameters extracted from B-mode ultrasonic images of normal liver (NL), CAH and LC. As input data the authors used six parameters calculated from a region of interest (ROI) and one parameter calculated from five ROIs in each image. These were the variance of pixel values, the coefficient of variation, the annular Fourier power spectrum, and the longitudinal Fourier power spectrum, which were calculated for the ROI, and the variation of the means of the five ROIs. In addition, the authors used two more parameters calculated from a co-occurrence matrix of pixel values in the ROI. The results showed that the neural network classifier achieved 83.8% sensitivity for LC, 90.0% sensitivity for CAH and 93.6% specificity, and the system was considered to be helpful for clinical and educational use.
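
    For illustration, a gray-level co-occurrence matrix of the kind used for the last two parameters can be computed as below; this is a generic sketch with an assumed horizontal offset of one pixel and requantization to 16 levels, not necessarily the settings used in the study.

      import numpy as np

      def cooccurrence_matrix(roi, levels=16):
          """Horizontal (offset = 1 pixel) gray-level co-occurrence matrix of an 8-bit ROI."""
          q = np.clip((roi.astype(np.int64) * levels) // 256, 0, levels - 1)  # requantize
          a, b = q[:, :-1].ravel(), q[:, 1:].ravel()      # left pixel, right neighbour
          glcm = np.zeros((levels, levels), dtype=np.float64)
          np.add.at(glcm, (a, b), 1.0)                    # count co-occurring level pairs
          return glcm / glcm.sum()                        # normalize to joint probabilities

      def contrast(glcm):
          """One common co-occurrence feature: sum of p(i, j) * (i - j)^2."""
          i, j = np.indices(glcm.shape)
          return float((glcm * (i - j) ** 2).sum())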

  9. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images.

    PubMed

    Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantage of automatically learned representations and strong generalization ability. A dedicated network structure for nodule images is proposed to recognize three types of nodules: solid, semisolid, and ground glass opacity (GGO). Deep convolutional neural networks are trained on 62,492 regions-of-interest (ROIs) samples including 40,772 nodules and 21,720 nonnodules from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and that it consistently outperforms the competing methods.

  10. Finding strong lenses in CFHTLS using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  11. Active learning of cortical connectivity from two-photon imaging data.

    PubMed

    Bertrán, Martín A; Martínez, Natalia L; Wang, Ye; Dunson, David; Sapiro, Guillermo; Ringach, Dario

    2018-01-01

    Understanding how groups of neurons interact within a network is a fundamental question in system neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer sparse connectivity graphs from in-vivo, two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this "active learning" method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while, at the same time, providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the methods and apply the method to real data. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model.

  12. Active learning of cortical connectivity from two-photon imaging data

    PubMed Central

    Wang, Ye; Dunson, David; Sapiro, Guillermo; Ringach, Dario

    2018-01-01

    Understanding how groups of neurons interact within a network is a fundamental question in system neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer sparse connectivity graphs from in-vivo, two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this “active learning” method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while, at the same time, providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the methods and apply the method to real data. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model. PMID:29718955

  13. Integration of Network Biology and Imaging to Study Cancer Phenotypes and Responses.

    PubMed

    Tian, Ye; Wang, Sean S; Zhang, Zhen; Rodriguez, Olga C; Petricoin, Emanuel; Shih, Ie-Ming; Chan, Daniel; Avantaggiati, Maria; Yu, Guoqiang; Ye, Shaozhen; Clarke, Robert; Wang, Chao; Zhang, Bai; Wang, Yue; Albanese, Chris

    2014-01-01

    Ever growing "omics" data and continuously accumulated biological knowledge provide an unprecedented opportunity to identify molecular biomarkers and their interactions that are responsible for cancer phenotypes that can be accurately defined by clinical measurements such as in vivo imaging. Since signaling or regulatory networks are dynamic and context-specific, systematic efforts to characterize such structural alterations must effectively distinguish significant network rewiring from random background fluctuations. Here we introduced a novel integration of network biology and imaging to study cancer phenotypes and responses to treatments at the molecular systems level. Specifically, Differential Dependence Network (DDN) analysis was used to detect statistically significant topological rewiring in molecular networks between two phenotypic conditions, and in vivo Magnetic Resonance Imaging (MRI) was used to more accurately define phenotypic sample groups for such differential analysis. We applied DDN to analyze two distinct phenotypic groups of breast cancer and study how genomic instability affects the molecular network topologies in high-grade ovarian cancer. Further, FDA-approved arsenic trioxide (ATO) and the ND2-SmoA1 mouse model of Medulloblastoma (MB) were used to extend our analyses of combined MRI and Reverse Phase Protein Microarray (RPMA) data to assess tumor responses to ATO and to uncover the complexity of therapeutic molecular biology.

  14. Integrated clinical workstations for image and text data capture, display, and teleconsultation.

    PubMed Central

    Dayhoff, R.; Kuzmak, P. M.; Kirin, G.

    1994-01-01

    The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway. PMID:7949899

  15. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user-interface, its popularity among the academic and medical community and its low cost. In comparison to other microcomputer-based systems the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP/IP protocol, and stored locally on magnetic disk. The use of high resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.

  16. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.

  17. A semi-symmetric image encryption scheme based on the function projective synchronization of two hyperchaotic systems

    PubMed Central

    Li, Jinqing; Qi, Hui; Cong, Ligang; Yang, Huamin

    2017-01-01

    Both symmetric and asymmetric color image encryption have advantages and disadvantages. In order to combine their advantages and try to overcome their disadvantages, chaos synchronization is used to avoid key transmission in the proposed semi-symmetric image encryption scheme. Our scheme is a hybrid chaotic encryption algorithm, and it consists of a scrambling stage and a diffusion stage. The control law and the update rule of function projective synchronization between the 3-cell quantum cellular neural networks (QCNN) response system and the 6th-order cellular neural network (CNN) drive system are formulated. Since function projective synchronization is used to synchronize the response and drive systems, Alice and Bob obtain the key from two different chaotic systems independently and avoid transmitting the key over extra secure links, which prevents key leakage during transmission. Both numerical simulations and security analyses such as information entropy analysis and differential attack analysis are conducted to verify the feasibility, security, and efficiency of the proposed scheme. PMID:28910349
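
    For readers unfamiliar with chaos-based diffusion, the following is a deliberately simplified sketch of the diffusion idea using a plain logistic-map keystream; the paper itself synchronizes a 3-cell QCNN with a 6th-order CNN, which is far more involved. Both sides regenerate the same keystream from shared initial conditions instead of transmitting a key.

      def logistic_keystream(length, x0=0.54321, r=3.99):
          """Byte keystream from the logistic map x <- r*x*(1-x); x0 plays the role of a shared secret."""
          x, out = x0, []
          for _ in range(length):
              x = r * x * (1.0 - x)
              out.append(int(x * 256) % 256)
          return out

      def diffuse(plain_bytes, x0=0.54321):
          """XOR diffusion: each cipher byte depends on the keystream and the previous cipher byte."""
          ks, prev, out = logistic_keystream(len(plain_bytes), x0), 0, bytearray(len(plain_bytes))
          for i, p in enumerate(plain_bytes):
              prev = p ^ ks[i] ^ prev
              out[i] = prev
          return bytes(out)

      def undiffuse(cipher_bytes, x0=0.54321):
          """Inverse of diffuse(): recover the plain bytes given the same shared x0."""
          ks, prev, out = logistic_keystream(len(cipher_bytes), x0), 0, bytearray(len(cipher_bytes))
          for i, c in enumerate(cipher_bytes):
              out[i] = c ^ ks[i] ^ prev
              prev = c
          return bytes(out)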

  18. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing.

    PubMed

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-02-23

    Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem, based on a hybrid network and complex image processing. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors to support image acquisition over a delimited region. The terrestrial component contains a Ground Control Station, acting as a remote coordinator, which communicates via the internet with several Ground Data Terminals that form a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes: fixed-wing UAVs. In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms.

  19. Building a gateway with open source software for secure-DICOM communication over insecure networks

    NASA Astrophysics Data System (ADS)

    Emmel, Dirk; Ricke, Jens; Stohlmann, Lutz; Haderer, Alexander; Felix, Roland

    2002-05-01

    For teleradiology, the exchange of DICOM images is needed for several purposes. Existing solutions often do not consider the needs of data security and data privacy: communication is done without any encryption over insecure networks, or with encryption using proprietary solutions, which restricts communication to partners with the same equipment. Our goal was to build a gateway that offers a transparent solution for secure DICOM communication in a heterogeneous environment. We developed a PC-based gateway system with DICOM communication to the in-house network and secure DICOM communication over the insecure network. One gateway installed at each location is responsible for encryption/decryption. The sender simply transfers the image data over the DICOM protocol to the local gateway. The gateway forwards the data to the gateway at the destination site using the secure DICOM protocol, which is part of the DICOM standard. The receiving gateway forwards the image data to the final destination, again using the DICOM protocol. The gateway is based on open-source software and runs under several operating systems. Our experience shows a reliable solution that solves the security issues of DICOM communication of image data and integrates seamlessly into a heterogeneous DICOM environment.
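
    The encrypt-between-gateways idea can be sketched with the Python standard library. This is not the authors' gateway and it relays only one direction of one connection for brevity; the host names, ports, and certificate file names are hypothetical, and the secure DICOM profile itself would add TLS on the DICOM association rather than a generic relay.

      import socket
      import ssl

      LOCAL_DICOM_PORT = 11112                                  # plain DICOM from in-house modalities
      REMOTE_GATEWAY = ("gateway.remote-site.example", 2762)    # hypothetical peer gateway

      context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
      context.load_verify_locations("remote_gateway_ca.pem")    # trust the peer gateway
      context.load_cert_chain("local_gateway_cert.pem", "local_gateway_key.pem")

      def relay(plain_conn):
          """Forward one in-house DICOM stream over an encrypted channel."""
          raw = socket.create_connection(REMOTE_GATEWAY)
          secure = context.wrap_socket(raw, server_hostname=REMOTE_GATEWAY[0])
          while True:
              chunk = plain_conn.recv(4096)
              if not chunk:
                  break
              secure.sendall(chunk)            # encrypted hop between the two gateways
          secure.close()

      listener = socket.socket()
      listener.bind(("0.0.0.0", LOCAL_DICOM_PORT))
      listener.listen(1)
      conn, _ = listener.accept()
      relay(conn)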

  20. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing

    PubMed Central

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-01-01

    Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem based on a hybrid network and complex image processing. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, acting as a remote coordinator, which communicates via the Internet with several Ground Data Terminals that form a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes (fixed-wing UAVs). In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension are used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms. PMID:28241479

  1. Multisensor Network System for Wildfire Detection Using Infrared Image Processing

    PubMed Central

    Bosch, I.; Serrano, A.; Vergara, L.

    2013-01-01

    This paper presents the next step in the evolution of multi-sensor wireless network systems for the early automatic detection of forest fires. This network allows remote monitoring of each of the locations as well as communication between each of the sensors and with the control stations. The result is an increased coverage area, with quicker and safer responses. To determine the presence of a forest wildfire, the system employs decision fusion in thermal imaging, which can exploit various expected characteristics of a real fire, including short-term persistence and long-term increases over time. Results from testing in the laboratory and in a real environment are presented to validate and verify the accuracy of the operation of the proposed system. The system performance is gauged by the number of alarms and the time to the first alarm (corresponding to a real fire) for different probabilities of false alarm (PFA). The necessity of including decision fusion is thereby demonstrated. PMID:23843734
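
    A very simplified illustration of fusing the two cues named above (short-term persistence and long-term increase) on a single thermal-pixel time series follows. The thresholds, window lengths, and AND-style fusion are hypothetical choices for the sketch, not the paper's decision-fusion rule.

      import numpy as np

      def fire_alarm(series, hot=0.8, persist_frames=5, trend=0.01):
          """Fuse two cues from a normalized thermal-pixel time series:
          short-term persistence above a hot threshold and a long-term
          increasing trend. All thresholds here are illustrative only."""
          series = np.asarray(series, dtype=float)
          recent = series[-persist_frames:]
          persistence = np.all(recent > hot)                 # cue 1: stays hot
          slope = np.polyfit(np.arange(series.size), series, 1)[0]
          increasing = slope > trend                         # cue 2: keeps rising
          return persistence and increasing                  # simple AND fusion

      t = np.linspace(0, 1, 30)
      print(fire_alarm(0.5 + 0.5 * t))        # rising and ending hot -> True
      print(fire_alarm(np.full(30, 0.6)))     # warm but flat -> False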

  2. Multisensor network system for wildfire detection using infrared image processing.

    PubMed

    Bosch, I; Serrano, A; Vergara, L

    2013-01-01

    This paper presents the next step in the evolution of multi-sensor wireless network systems for the early automatic detection of forest fires. This network allows remote monitoring of each of the locations as well as communication between each of the sensors and with the control stations. The result is an increased coverage area, with quicker and safer responses. To determine the presence of a forest wildfire, the system employs decision fusion in thermal imaging, which can exploit various expected characteristics of a real fire, including short-term persistence and long-term increases over time. Results from testing in the laboratory and in a real environment are presented to validate and verify the accuracy of the operation of the proposed system. The system performance is gauged by the number of alarms and the time to the first alarm (corresponding to a real fire) for different probabilities of false alarm (PFA). The necessity of including decision fusion is thereby demonstrated.

  3. Cross-Domain Shoe Retrieval with a Semantic Hierarchy of Attribute Classification Network.

    PubMed

    Zhan, Huijing; Shi, Boxin; Kot, Alex C

    2017-08-04

    Cross-domain shoe image retrieval is a challenging problem, because the query photo from the street domain (daily life scenario) and the reference photo in the online domain (online shop images) have significant visual differences due to viewpoint and scale variation, self-occlusion, and cluttered background. This paper proposes the Semantic Hierarchy Of attributE Convolutional Neural Network (SHOE-CNN) with a three-level feature representation for discriminative shoe feature expression and efficient retrieval. The SHOE-CNN, with its newly designed loss function, systematically merges semantic attributes of closer visual appearance to prevent shoe images with obvious visual differences from being confused with each other; the features extracted at the image, region, and part levels effectively match shoe images across different domains. We collect a large-scale shoe dataset composed of 14,341 street-domain and 12,652 corresponding online-domain images with fine-grained attributes to train our network and evaluate our system. The top-20 retrieval accuracy improves significantly over the solution with pre-trained CNN features.

  4. Localization of lung fields in HRCT images using a deep convolution neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Abhishek; Agarwala, Sunita; Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Nandi, Debashis; Garg, Mandeep; Khandelwal, Niranjan; Kalra, Naveen

    2018-02-01

    Lung field segmentation is a prerequisite step in the development of a computer-aided diagnosis system for interstitial lung diseases observed in chest HRCT images. Conventional methods of lung field segmentation rely on a large gray-value contrast between the lung fields and the surrounding tissues. These methods fail on lung HRCT images with dense and diffuse pathology. Efficient preprocessing can improve the accuracy of segmentation of pathological lung fields in HRCT images. In this paper, a convolutional neural network is used for localization of lung fields in HRCT images. The proposed method provides an optimal bounding box enclosing the lung fields irrespective of the presence of diffuse pathology. The performance of the proposed algorithm is validated on 330 lung HRCT images obtained from the MedGift database using ZF and VGG networks. The model achieves a mean average precision of 0.94 with ZF net and slightly better performance, with a mean average precision of 0.95, in the case of VGG net.
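
    Bounding-box localization of this kind is typically scored with intersection-over-union (IoU), from which mean average precision is then computed over ranked detections. The snippet below is a generic IoU check with made-up box coordinates, not the authors' evaluation code.

      def iou(box_a, box_b):
          """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
          ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
          ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
          inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
          area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
          area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
          return inter / float(area_a + area_b - inter)

      # A detection is commonly counted correct when IoU >= 0.5; average
      # precision is then computed over the ranked detections.
      predicted = (30, 40, 220, 260)      # hypothetical lung-field box
      ground_truth = (35, 45, 215, 255)
      print(iou(predicted, ground_truth) >= 0.5)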

  5. Thin client performance for remote 3-D image display.

    PubMed

    Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin

    2003-01-01

    Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and that optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance over conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.

  6. Phytoplankton Imaging and Analysis System: Instrumentation for Field and Laboratory Acquisition, Analysis and WWW/LAN-Based Sharing of Marine Phytoplankton Data (DURIP)

    DTIC Science & Technology

    2000-09-30

    networks (LAN), (3) quantifying size, shape, and other parameters of plankton cells and colonies via image analysis and image reconstruction, and (4) creating educational materials (e.g. lectures, videos etc.).

  7. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture image features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effect of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, fuzzy mapping functions are adopted. The number of clusters in the output layer can be increased continually by an auto-growing mechanism whenever a new pattern occurs. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.
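
    The auto-growing behavior of ART-style classifiers can be illustrated with a much-simplified online clustering loop: a new cluster is created whenever no existing prototype matches the input above a vigilance level. The feature vectors, vigilance value, and learning rate below are toy assumptions and this is not the authors' ART2 implementation.

      import numpy as np

      def art_cluster(features, vigilance=0.9, lr=0.3):
          """Simplified ART-style online clustering with auto-growing clusters."""
          prototypes, labels = [], []
          for x in features:
              x = x / (np.linalg.norm(x) + 1e-9)
              if prototypes:
                  sims = [float(p @ x) for p in prototypes]
                  best = int(np.argmax(sims))
              if not prototypes or sims[best] < vigilance:
                  prototypes.append(x.copy())          # grow a new cluster
                  labels.append(len(prototypes) - 1)
              else:
                  prototypes[best] = (1 - lr) * prototypes[best] + lr * x
                  prototypes[best] /= np.linalg.norm(prototypes[best])
                  labels.append(best)
          return labels, prototypes

      rng = np.random.default_rng(0)
      smooth = rng.normal([1.0, 0.1, 0.1], 0.05, (20, 3))   # toy SGLDM-like features
      coarse = rng.normal([0.1, 1.0, 0.6], 0.05, (20, 3))
      labels, protos = art_cluster(np.vstack([smooth, coarse]))
      print(len(protos), labels[:5], labels[-5:])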

  8. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    PubMed

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. The work is set in a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints on positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain, yielding a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.
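
    The core MIL idea, aggregating per-pixel (instance) predictions into an image-level (bag) prediction that the weak label can supervise, plus an area-style constraint, can be sketched generically as below. The pooling exponent and area prior are illustrative assumptions; this is not the DWS-MIL formulation itself.

      import numpy as np

      def bag_probability(instance_probs, r=4.0):
          """Generalized-mean pooling of per-pixel cancer probabilities into one
          image-level probability; r -> infinity approaches max pooling, r = 1
          is average pooling."""
          p = np.clip(np.asarray(instance_probs, dtype=float), 1e-6, 1.0)
          return (np.mean(p ** r)) ** (1.0 / r)

      def area_penalty(instance_probs, expected_fraction=0.3):
          """Toy 'positive area' constraint: penalize predictions whose positive
          fraction strays far from a rough prior."""
          fraction = float(np.mean(np.asarray(instance_probs) > 0.5))
          return (fraction - expected_fraction) ** 2

      pixel_probs = np.random.rand(64, 64) * 0.3        # mostly-negative image
      pixel_probs[20:30, 20:30] = 0.95                  # a small positive region
      print(round(bag_probability(pixel_probs.ravel()), 3))
      print(round(area_penalty(pixel_probs.ravel()), 3))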

  9. TME10/380: Remote Transmission of Radiological Images by means of Intranet/Internet Technology

    PubMed Central

    Sicurello, F; Pizzi, R

    1999-01-01

    At the Istituto Nazionale Neurologico C. Besta in Milan, a network architecture based on Intranet technology has been developed to connect computers and diagnostic modalities and to give the hospital external access through the Internet. Internet technology has become the "glue" that makes it possible to link different computers and to develop applications that work independently of the hardware/software platform. Using a PACS (Picture Archiving and Communication System) integrated with the diagnostic modalities by means of the standardized DICOM image format, the digital radiological images can be transferred, displayed and processed on dedicated visualization workstations throughout the hospital. From these workstations the same images can be transferred in DICOM format to a teleconsulting workstation. The hospital is involved in a national project for the remote connection of many Italian hospitals. This national network is linked to already developed regional networks such as the Toscana MAN and the ATM Sirius Network. Some links are implemented directly in ATM (155 Mbps), others are based on CDN (Direct Numerical Connection, 2 Mbps), and others simply on ISDN connections. The system makes the already established daily exchange of radiological reports between the participating hospitals, especially between the Istituto Nazionale Neurologico and the Istituto Nazionale dei Tumori, simpler and faster. All the actions performed by the radiologist are translated by the software into "events" and replicated on the remote workstation, and vice versa. In this way the radiologists can see each other, speak together and act in real time on a common "board" of diagnostic images, each with his or her own pointer. The adopted technology is evolving toward a system based on a web architecture and Java applications, useful for small clinical centers not equipped with expensive information systems. These centers will be able to obtain consulting services from the centers of excellence, making accurate diagnoses and therapy protocols available.

  10. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain. In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Application of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm, since the Jacobians learned by the network have to satisfy the weak requirement that they yield a reduction of the distance between gripper and target. The second network is proposed as a model for the mammalian vision system in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus play a key role. The application of Hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. Orientational preferences are organized into a feature map in the primary visual cortex by the application of lateral interactions during the learning phase. The organization of the mature network is compared to that found in the macaque monkey by several analytical tests. The capacity of the network to process images is investigated. By a method of reconstructing the input images in terms of V1 activities, the simulations show that images can be faithfully represented in V1 by the proposed network. The signal-to-noise ratio of the image is improved by the representation, and compression ratios of well over two hundred are possible. Lateral interactions between V1 neurons sharpen their orientational tuning. We further study the dynamics of the processing, showing that the rate of decrease of the reconstruction error is maximized for the receptive fields used. Lastly, we employ a Fokker-Planck equation for a more detailed prediction of the error value vs. time. The Fokker-Planck equation for an underdamped system with a driving force is derived, yielding an energy-dependent diffusion coefficient which is the integral of the spectral densities of the force and the velocity of the system. The theory is applied to correlated noise activation and resonant activation. Simulation results for the error of the network vs. time are compared to the solution of the Fokker-Planck equation.
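
    The repeated-correction idea (apply a learned Jacobian each step so the camera-space distance between gripper and target shrinks) can be sketched with a toy model. The arm model, Jacobian values, gain, and tolerances below are invented stand-ins; the thesis learns these quantities with a neural map rather than hard-coding them.

      import numpy as np

      def arm_position(pressures):
          """Toy nonlinear actuator model standing in for the pneumatic arm."""
          return np.array([np.tanh(pressures[0]) + 0.3 * pressures[1],
                           np.sin(pressures[1]) - 0.2 * pressures[0]])

      def correct_until_close(target, pressures, jacobian, steps=50, gain=0.5):
          """Repeatedly apply the (approximate) inverse Jacobian so each step
          reduces the image-space distance between gripper and target."""
          for _ in range(steps):
              error = target - arm_position(pressures)
              if np.linalg.norm(error) < 1e-3:
                  break
              pressures = pressures + gain * np.linalg.solve(jacobian, error)
          return pressures, np.linalg.norm(error)

      target = np.array([0.8, 0.1])
      jacobian = np.array([[1.0, 0.3], [-0.2, 1.0]])   # rough local linearization
      p, err = correct_until_close(target, np.zeros(2), jacobian)
      print(p, round(float(err), 4))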

  11. Automated imaging system for single molecules

    DOEpatents

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  12. Detection of micro gap weld joint by using magneto-optical imaging and Kalman filtering compensated with RBF neural network

    NASA Astrophysics Data System (ADS)

    Gao, Xiangdong; Chen, Yuquan; You, Deyong; Xiao, Zhenlin; Chen, Xiaohui

    2017-02-01

    An approach is investigated for seam tracking of a micro gap weld, whose width is less than 0.1 mm, based on a magneto-optical (MO) imaging technique during butt-joint laser welding of steel plates. Kalman filtering (KF) combined with a radial basis function (RBF) neural network was applied to MO sensor measurements to track the weld center position. Because the process noise of the laser welding system and the measurement noise of the MO sensor are colored, and the system model is highly nonlinear, the estimation accuracy of a traditional KF for seam tracking is degraded and cannot be recovered with a linear state-space model. Also, the statistical characteristics of the noise cannot be accurately obtained in actual welding. Thus, an RBF neural network was applied to the KF to compensate for the weld tracking errors. The neural network restrains filter divergence and improves system robustness. In comparison with the traditional KF algorithm, the RBF-compensated KF not only improved the weld tracking accuracy more effectively but also reduced noise disturbance. Experimental results showed that the magneto-optical imaging technique can detect a micro gap weld accurately, providing a novel approach to micro gap seam tracking.
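
    A minimal sketch of the idea, a scalar Kalman filter whose update is compensated by an RBF mapping of the residual, is given below. The RBF centers and weights are made-up placeholders standing in for trained values, and the noise parameters are arbitrary; this is not the paper's formulation.

      import numpy as np

      def rbf_correction(residual, centers, weights, width=0.5):
          """Toy RBF network mapping the filter residual to a correction term."""
          phi = np.exp(-((residual - centers) ** 2) / (2 * width ** 2))
          return float(phi @ weights)

      def track(measurements, q=1e-4, r=0.04):
          """Scalar Kalman filter for the weld-center position, with an
          RBF-based compensation added to the update step."""
          centers = np.linspace(-1.0, 1.0, 7)
          weights = 0.05 * centers                 # hypothetical, stands in for training
          x, p = measurements[0], 1.0
          estimates = []
          for z in measurements:
              p = p + q                            # predict (position assumed nearly static)
              k = p / (p + r)                      # Kalman gain
              residual = z - x
              x = x + k * residual + rbf_correction(residual, centers, weights)
              p = (1 - k) * p
              estimates.append(x)
          return np.array(estimates)

      true_center = 0.3 + 0.001 * np.arange(200)               # slowly drifting seam
      noisy = true_center + np.random.normal(0, 0.05, 200)     # MO measurement noise
      est = track(noisy)
      print(round(float(np.mean(np.abs(est - true_center))), 4))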

  13. Micro-MRI-based image acquisition and processing system for assessing the response to therapeutic intervention

    NASA Astrophysics Data System (ADS)

    Vasilić, B.; Ladinsky, G. A.; Saha, P. K.; Wehrli, F. W.

    2006-03-01

    Osteoporosis is the cause of over 1.5 million bone fractures annually. Most of these fractures occur in sites rich in trabecular bone, a complex network of bony struts and plates found throughout the skeleton. The three-dimensional structure of the trabecular bone network significantly determines mechanical strength and thus fracture resistance. Here we present a data acquisition and processing system that allows efficient noninvasive assessment of trabecular bone structure through a "virtual bone biopsy". High-resolution MR images are acquired, from which the trabecular bone network is extracted by estimating the partial bone occupancy of each voxel. A heuristic voxel subdivision increases the effective resolution of the bone volume fraction map and serves as a basis for subsequent analysis of topological and orientational parameters. Semi-automated registration and segmentation ensure selection of the same anatomical location in subjects imaged at different time points during treatment. It is shown, with excerpts from an ongoing clinical study of early post-menopausal women, that a significant reduction in network connectivity occurs in the control group while structural integrity is maintained in the hormone replacement group. The system described should be suited for large-scale studies designed to evaluate the efficacy of therapeutic intervention in subjects with metabolic bone disease.

  14. Integrating data from biological experiments into metabolic networks with the DBE information system.

    PubMed

    Borisjuk, Ljudmilla; Hajirezaei, Mohammad-Reza; Klukas, Christian; Rolletschek, Hardy; Schreiber, Falk

    2005-01-01

    Modern 'omics' technologies result in huge amounts of data about life processes. For analysis and data mining purposes, these data have to be considered in the context of the underlying biological networks. This work presents an approach for integrating data from biological experiments into metabolic networks by mapping the data onto network elements and visualising the data-enriched networks automatically. This methodology is implemented in DBE, an information system that supports the analysis and visualisation of experimental data in the context of metabolic networks. It consists of five parts: (1) the DBE-Database for consistent data storage, (2) the Excel-Importer application for data import, (3) the DBE-Website as the interface to the system, (4) the DBE-Pictures application for the upload and download of binary (e.g., image) files, and (5) DBE-Gravisto, a network analysis and graph visualisation system. The usability of this approach is demonstrated in two examples.

  15. Automated endoscopic navigation and advisory system from medical image

    NASA Astrophysics Data System (ADS)

    Kwoh, Chee K.; Khan, Gul N.; Gillies, Duncan F.

    1999-05-01

    In this paper, we present a review of the research conducted by our group to design an automatic endoscope navigation and advisory system. The whole system can be viewed as a two-layer system. The first layer is at the signal level, which consists of the processing performed on a series of images to extract all identifiable features. The information is purely dependent on what can be extracted from the 'raw' images. At the signal level, the first task is to detect a single dominant feature, the lumen. A few methods of identifying the lumen are proposed. The first method uses contour extraction. Contours are extracted by edge detection, thresholding and linking. This method requires images to be divided into overlapping squares (8 by 8 or 4 by 4) from which line segments are extracted using a Hough transform. Perceptual criteria such as proximity, connectivity, similarity in orientation, contrast and edge pixel intensity are used to group both strong and weak edges. This approach is called perceptual grouping. The second method is based on region extraction using a split-and-merge approach on spatial-domain data. An n-level (for a 2^n by 2^n image) quadtree-based pyramid structure is constructed to find the most homogeneous large dark region, which in most cases corresponds to the lumen. The algorithm constructs the quadtree from the bottom (pixel) level upward, recursively, and computes the mean and variance of the image regions corresponding to quadtree nodes. On reaching the root, the largest uniform seed region, whose mean corresponds to a lumen, is selected and grown by merging with its neighboring regions. In addition to the use of two-dimensional information in the form of regions and contours, three-dimensional shape can provide additional information that enhances the system capabilities. Shape or depth information from an image is estimated by various methods. A particular technique suitable for endoscopy is shape from shading, which is used to obtain the relative depth of the colon surface in the image by assuming a point light source very close to the camera. If we assume the colon has a shape similar to a tube, then a reasonable approximation of the position of the center of the colon (lumen) will be a function of the direction in which the majority of the surface normal vectors are pointing. The second layer is the control layer, and at this level a decision model must be built for the endoscope navigation and advisory system. The system that we built uses probabilistic network models to create a basic artificial-intelligence system for navigation in the colon. We have constructed the probabilistic networks from correlated objective data using the maximum weighted spanning tree algorithm. In the construction of a probabilistic network, it is always assumed that variables starting from the same parent are conditionally independent. However, this may not hold and would give rise to incorrect inferences. In these cases, we propose the creation of a hidden node to modify the network topology, which in effect models the dependency of correlated variables, to solve the problem. The conditional probability matrices linking the hidden node to its neighbors are determined using a gradient descent method that minimizes an objective cost function. The error gradients can be treated as updating messages and can be propagated in any direction throughout any singly connected network to adjust the network parameters. With the above two-level approach, we have been able to build an automated endoscope navigation and advisory system successfully.
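
    The split-and-merge search for the darkest homogeneous region can be illustrated with a short recursive sketch. It splits top-down rather than building the pyramid bottom-up as described, and the variance threshold, minimum block size, and synthetic image are arbitrary assumptions.

      import numpy as np

      def homogeneous_blocks(img, r, c, size, var_thresh=0.01):
          """Recursively split a square region until it is homogeneous,
          returning (row, col, size, mean) for each accepted block."""
          block = img[r:r + size, c:c + size]
          if size == 1 or block.var() <= var_thresh:
              return [(r, c, size, float(block.mean()))]
          half = size // 2
          blocks = []
          for dr in (0, half):
              for dc in (0, half):
                  blocks += homogeneous_blocks(img, r + dr, c + dc, half, var_thresh)
          return blocks

      def lumen_seed(img, min_size=8):
          """Pick the darkest homogeneous block that is large enough; in the
          reviewed system this seed is then grown by merging neighbors."""
          blocks = [b for b in homogeneous_blocks(img, 0, 0, img.shape[0])
                    if b[2] >= min_size]
          return min(blocks, key=lambda b: b[3])

      img = np.full((64, 64), 0.8) + 0.02 * np.random.rand(64, 64)
      img[8:40, 16:48] -= 0.6                     # dark region standing in for the lumen
      print(lumen_seed(img))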

  16. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1992-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
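
    For reference, the sketch below shows a generic first-order DPCM (differential) encoder and decoder along one image row; the coarse quantization step is an arbitrary choice that makes the edge-degradation problem visible. It illustrates plain differential coding, not the edge-preserving scheme proposed in the paper.

      import numpy as np

      def dpcm_encode(row, step=8):
          """First-order DPCM along one image row: quantize the difference
          between each pixel and the decoder's previous reconstruction."""
          codes, recon = [], 0
          for pixel in row:
              diff = int(pixel) - recon
              q = int(round(diff / step))          # coarse quantization of the residual
              codes.append(q)
              recon = recon + q * step             # track what the decoder will see
          return codes

      def dpcm_decode(codes, step=8):
          recon, out = 0, []
          for q in codes:
              recon = recon + q * step
              out.append(recon)
          return np.array(out)

      row = np.array([100, 102, 104, 180, 182, 60, 61])   # a row with sharp edges
      decoded = dpcm_decode(dpcm_encode(row))
      print(decoded, np.abs(decoded - row).max())          # errors concentrate near edges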

  17. Digital image classification with the help of artificial neural network by simple histogram

    PubMed Central

    Dey, Pranab; Banerjee, Nirmalya; Kaur, Rajwant

    2016-01-01

    Background: Visual image classification is a great challenge to the cytopathologist in routine day-to-day work, and an artificial neural network (ANN) may be helpful in this matter. Aims and Objectives: In this study, we tried to classify digital images of malignant and benign cells in effusion cytology smears with the help of simple histogram data and an ANN. Materials and Methods: A total of 404 digital images, consisting of 168 benign cells and 236 malignant cells, were selected for this study. Simple histogram data were extracted from these digital images, and an ANN was constructed with the help of Neurointelligence software [Alyuda Neurointelligence 2.2 (577), Cupertino, California, USA]. The network architecture was 6-3-1. The images were divided into a training set (281), a validation set (63), and a test set (60). The online backpropagation training algorithm was used for this study. Result: A total of 10,000 iterations were performed to train the ANN system at a speed of 609.81 iterations/s. After adequate training of this ANN model, the system was able to identify all 34 malignant cell images and 24 out of 26 benign cells. Conclusion: The ANN model can be used for the identification of individual malignant cells with the help of simple histogram data. This study will be helpful in the future for identifying malignant cells in unknown situations. PMID:27279679
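
    A 6-3-1 topology of this kind (six histogram-derived inputs, three hidden units, one output) can be reproduced generically with scikit-learn. The six features and the synthetic labels below are hypothetical stand-ins for the study's histogram data; this is not the Neurointelligence model.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # Six hypothetical histogram features per cell image; 1 = malignant, 0 = benign.
      rng = np.random.default_rng(1)
      benign = rng.normal(0.3, 0.1, (168, 6))
      malignant = rng.normal(0.6, 0.1, (236, 6))
      X = np.vstack([benign, malignant])
      y = np.array([0] * 168 + [1] * 236)

      # One hidden layer of three units gives the 6-3-1 architecture.
      model = MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
      model.fit(X, y)
      print(round(model.score(X, y), 3))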

  18. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/h. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote-site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote-site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g., distinguish cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle head-on in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods that permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of automatic classification of vehicle class and recording of vehicle number plates with a success rate of around 90 percent over a period of 24 hours.

  19. [Construction of DICOM-WWW gateway by open source, and application to PDAs using the high-speed mobile communications network].

    PubMed

    Yokohama, Noriya

    2003-09-01

    The author constructed a medical image network system using open-source software, with security taken into consideration. The system enables search and browsing with a WWW browser, with the images stored in a DICOM server. To realize this function, software was developed in the PHP language to bridge the gap between the DICOM protocol and HTTP. The transmission speed was evaluated with respect to the difference between the DICOM and HTTP protocols. Furthermore, an attempt was made to evaluate the convenience of medical image access from a personal digital assistant (PDA) via the Internet over a high-speed mobile communications network. The results suggested the feasibility of remote diagnosis and application to emergency care.

  20. New model for distributed multimedia databases and its application to networking of museums

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 video and/or super-high-definition images are connected through B-ISDNs, and describes an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can regard a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent through the B-ISDNs directly to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment. The generated retrieval parameters are then used to select the most suitable data transfer path on the network. The best combination of these parameters is therefore fitted to the distributed multimedia database system.

  1. Proteus: a reconfigurable computational network for computer vision

    NASA Astrophysics Data System (ADS)

    Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.

    1992-04-01

    The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 gigaflops (80 gigaflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, with the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external read/write allocating caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.
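
    The gray-scale morphology operations named at the end can be illustrated with a plain NumPy sketch using a flat square structuring element; the kernel size and test image are arbitrary, and this is of course not the Proteus library itself.

      import numpy as np

      def dilate(img, k=3):
          """Gray-scale dilation with a flat k x k structuring element."""
          pad = k // 2
          p = np.pad(img, pad, mode="edge")
          out = np.empty_like(img)
          for r in range(img.shape[0]):
              for c in range(img.shape[1]):
                  out[r, c] = p[r:r + k, c:c + k].max()
          return out

      def erode(img, k=3):
          pad = k // 2
          p = np.pad(img, pad, mode="edge")
          out = np.empty_like(img)
          for r in range(img.shape[0]):
              for c in range(img.shape[1]):
                  out[r, c] = p[r:r + k, c:c + k].min()
          return out

      def opening(img, k=3):   # erosion then dilation: removes bright specks
          return dilate(erode(img, k), k)

      def closing(img, k=3):   # dilation then erosion: fills dark gaps
          return erode(dilate(img, k), k)

      img = np.random.randint(0, 256, (32, 32))
      print(opening(img).shape, closing(img).shape)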

  2. Intelligent Network-Centric Sensors Development Program

    DTIC Science & Technology

    2012-07-31

    [Abstract garbled in the source. Recoverable fragments describe image-sensor configurations (360-degree cone LWIR, MWIR and SWIR sensors), a reasoning process for matching sensor systems to algorithms, and nonuniformity in active imaging attributed to coherent-imaging aberrations and specular reflection.]

  3. Imaging complex nutrient dynamics in mycelial networks.

    PubMed

    Fricker, M D; Lee, J A; Bebber, D P; Tlalka, M; Hynes, J; Darrah, P R; Watkinson, S C; Boddy, L

    2008-08-01

    Transport networks are vital components of multi-cellular organisms, distributing nutrients and removing waste products. Animal cardiovascular and respiratory systems, and plant vasculature, are branching trees whose architecture is thought to determine universal scaling laws in these organisms. In contrast, the transport systems of many multi-cellular fungi do not fit into this conceptual framework, as they have evolved to explore a patchy environment in search of new resources, rather than ramify through a three-dimensional organism. These fungi grow as a foraging mycelium, formed by the branching and fusion of threadlike hyphae, that gives rise to a complex network. To function efficiently, the mycelial network must both transport nutrients between spatially separated source and sink regions and also maintain its integrity in the face of continuous attack by mycophagous insects or random damage. Here we review the development of novel imaging approaches and software tools that we have used to characterise nutrient transport and network formation in foraging mycelia over a range of spatial scales. On a millimetre scale, we have used a combination of time-lapse confocal imaging and fluorescence recovery after photobleaching to quantify the rate of diffusive transport through the unique vacuole system in individual hyphae. These data then form the basis of a simulation model to predict the impact of such diffusion-based movement on a scale of several millimetres. On a centimetre scale, we have used novel photon-counting scintillation imaging techniques to visualize radiolabel movement in small microcosms. This approach has revealed novel N-transport phenomena, including rapid, preferential N-resource allocation to C-rich sinks, induction of simultaneous bi-directional transport, abrupt switching between different pre-existing transport routes, and a strong pulsatile component to transport in some species. Analysis of the pulsatile transport component using Fourier techniques shows that as the colony forms, it self-organizes into well demarcated domains that are identifiable by differences in the phase relationship of the pulses. On the centimetre to metre scale, we have begun to use techniques borrowed from graph theory to characterize the development and dynamics of the network, and used these abstracted network models to predict the transport characteristics, resilience, and cost of the network.

  4. Feasibility Study on a Portable Field Pest Classification System Design Based on DSP and 3G Wireless Communication Technology

    PubMed Central

    Han, Ruizhen; He, Yong; Liu, Fei

    2012-01-01

    This paper presents a feasibility study on a real-time, in-field pest classification system design based on a Blackfin DSP and 3G wireless communication technology. The prototype system is composed of a remote on-line classification platform (ROCP), which uses a digital signal processor (DSP) as its core CPU, and a host control platform (HCP). The ROCP is in charge of acquiring the pest image, extracting image features and detecting the class of pest using an Artificial Neural Network (ANN) classifier. At the same time, it sends the image data, encoded with JPEG 2000 on the DSP, to the HCP through the 3G network for further identification. The image transmission and communication are accomplished using 3G technology. Our system transmits the data via a commercial base station. The system can work properly within the effective coverage of base stations, regardless of the distance from the ROCP to the HCP. In the HCP, the image data is decoded and the pest image displayed in real time for further identification. Authentication and performance tests of the prototype system were conducted. The authentication test showed that the image data were transmitted correctly. Based on the performance test results on six classes of pests, the average accuracy is 82%. Considering the different poses of live pests and the different field lighting conditions, the result is satisfactory. The proposed technique is well suited for implementation in on-line field pest classification for precision agriculture. PMID:22736996

  5. Feasibility study on a portable field pest classification system design based on DSP and 3G wireless communication technology.

    PubMed

    Han, Ruizhen; He, Yong; Liu, Fei

    2012-01-01

    This paper presents a feasibility study on a real-time, in-field pest classification system design based on a Blackfin DSP and 3G wireless communication technology. The prototype system is composed of a remote on-line classification platform (ROCP), which uses a digital signal processor (DSP) as its core CPU, and a host control platform (HCP). The ROCP is in charge of acquiring the pest image, extracting image features and detecting the class of pest using an Artificial Neural Network (ANN) classifier. At the same time, it sends the image data, encoded with JPEG 2000 on the DSP, to the HCP through the 3G network for further identification. The image transmission and communication are accomplished using 3G technology. Our system transmits the data via a commercial base station. The system can work properly within the effective coverage of base stations, regardless of the distance from the ROCP to the HCP. In the HCP, the image data is decoded and the pest image displayed in real time for further identification. Authentication and performance tests of the prototype system were conducted. The authentication test showed that the image data were transmitted correctly. Based on the performance test results on six classes of pests, the average accuracy is 82%. Considering the different poses of live pests and the different field lighting conditions, the result is satisfactory. The proposed technique is well suited for implementation in on-line field pest classification for precision agriculture.

  6. GLOBECOM '84 - Global Telecommunications Conference, Atlanta, GA, November 26-29, 1984, Conference Record. Volume 3

    NASA Astrophysics Data System (ADS)

    Attention is given to aspects of quality assurance methodologies in development life cycles, optical intercity transmission systems, multiaccess protocols, system and technology aspects of regional/domestic satellites, advances in SSB-AM radio transmission over terrestrial and satellite networks, and development environments for telecommunications systems. Other subjects studied concern business communication networks for voice and data, VLSI in local networks and communication protocols, product evaluation and support, an update regarding Videotex, topics in communication theory, topics in radio propagation, a status report on the societal effects of technology in the workplace, digital image processing, and adaptive signal processing for communications. The management of the reliability function in the development process is considered, along with gigabit technologies for long-distance, large-capacity optical transmission equipment, the application of gallium arsenide analog and digital integrated circuits to high-speed fiber-optic communications, and a simple algorithm for image data coding.

  7. [Development of a secure and cost-effective infrastructure for the access of arbitrary web-based image distribution systems].

    PubMed

    Hackländer, T; Kleber, K; Schneider, H; Demabre, N; Cramer, B M

    2004-08-01

    To build an infrastructure that gives on-call radiologists and external users teleradiological access via the Internet to the HTML-based image distribution system inside the hospital. In addition, no investment costs should arise on the user side, and the image data should be transmitted in renamed (pseudonymized) form using cryptographic techniques. A purely HTML-based system manages the image distribution inside the hospital, and an open-source project extends this system through a secure gateway outside the firewall of the hospital. The gateway handles the communication between the external users and the HTML server within the network of the hospital. A second firewall is installed between the gateway and the external users and builds up a virtual private network (VPN). A connection between the gateway and an external user is only acknowledged if the computers involved authenticate each other via certificates and the external users authenticate via a multi-stage password system. All data are transferred encrypted. External users only get access to images that have previously been renamed to a pseudonym by automated processing. With ADSL Internet access, external users achieve an image load rate of 0.4 CT images per second. More than 90% of the delay during image transfer results from security checks within the firewalls. Data passing the gateway induce no measurable delay. The project goals were realized by means of an infrastructure that works vendor-independently with any HTML-based image distribution system. The requirements of data security were met using state-of-the-art web techniques. Adequate access and transfer speed led to widespread acceptance of the system among external users.
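
    The certificate-based mutual authentication described here can be illustrated with the Python standard library's ssl module. The gateway address, path, and certificate file names below are hypothetical; this is a generic mutual-TLS client sketch, not the system's actual VPN stack.

      import socket
      import ssl

      GATEWAY = ("images.hospital-gateway.example", 443)        # hypothetical gateway

      context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
      context.load_verify_locations("hospital_gateway_ca.pem")  # verify the gateway
      context.load_cert_chain("external_user_cert.pem", "external_user_key.pem")

      with socket.create_connection(GATEWAY) as raw:
          # Both sides present certificates; the handshake fails unless the
          # gateway trusts the user certificate and vice versa.
          with context.wrap_socket(raw, server_hostname=GATEWAY[0]) as tls:
              tls.sendall(b"GET /images/study/index.html HTTP/1.0\r\n\r\n")
              print(tls.recv(1024)[:80])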

  8. Retina vascular network recognition

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo

    1993-09-01

    The analysis of morphological and structural modifications of the retinal vascular network is an interesting investigation method in the study of diabetes and hypertension. Normally this analysis is carried out by qualitative evaluation according to standardized criteria, although medical research attaches great importance to quantitative analysis of vessel color, shape and dimensions. The paper describes a system that automatically segments and recognizes the ocular fundus circulation and microcirculation network and extracts a set of features related to morphometric aspects of the vessels. For this class of images the classical segmentation methods seem weak. We propose a computer vision system in which the segmentation and recognition phases are strictly connected. The system is hierarchically organized in four modules. First, the Image Enhancement Module (IEM) applies a set of custom image enhancements to remove blur and to prepare the data for the subsequent segmentation and recognition processes. Second, the Papilla Border Analysis Module (PBAM) automatically recognizes the number, position and local diameter of the blood vessels departing from the optical papilla. Then the Vessel Tracking Module (VTM) analyzes vessels by comparing the results of body and edge tracking and detects branches and crossings. Finally, the Feature Extraction Module evaluates the PBAM and VTM output data and extracts numerical indexes. The algorithms used appear to be robust and have been successfully tested on various ocular fundus images.

  9. Continued implementation and testing of a Neighborhood Office Center (NOC) and integration of the NOC with an administrative correspondence management information system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The concept of decentralized (remote) neighborhood offices, linked together through a self-sustaining communications network for exchanging voice messages, video images, and digital data was quantitatively evaluated. Hardware and procedures for the integrated multifunctional system were developed. The configuration of the neighborhood office center (NOC) is explained, its production statistics given, and an experiment for NOC network integration via satellite is described. The hardware selected for the integration NOC/management information system is discussed, and the NASA teleconferencing network is evaluated.

  10. Extending the multimedia patient record across the wide area network.

    PubMed Central

    Dayhoff, R. E.; Kuzmak, P. M.; Frank, S. A.; Kirin, G.; Saddler, C.

    1996-01-01

    The Dept. of Veterans Affairs is developing and testing a wide area medical network with multimedia capabilities for coordination and consolidation of medical services across locations. The system is composed of multimedia information systems at individual medical centers connected by a high-speed wide area network. The DHCP Imaging System, which has been in clinical use for six years, provides storage management and workstation acquisition and display of the multimedia data. Teleconsulting capability using a variety of mechanisms is being prototyped and tested to meet medical staffing and consultation needs. PMID:8947747

  11. Extending the multimedia patient record across the wide area network.

    PubMed

    Dayhoff, R E; Kuzmak, P M; Frank, S A; Kirin, G; Saddler, C

    1996-01-01

    The Dept. of Veterans Affairs is developing and testing a wide area medical network with multimedia capabilities for coordination and consolidation of medical services across locations. The system is composed of multimedia information systems at individual medical centers connected by a high-speed wide area network. The DHCP Imaging System, which has been in clinical use for six years, provides storage management and workstation acquisition and display of the multimedia data. Teleconsulting capability using a variety of mechanisms is being prototyped and tested to meet medical staffing and consultation needs.

  12. Schodack Smart Roadside Inspection System.

    DOT National Transportation Integrated Search

    2013-02-01

    Under an earlier NYSERDA Agreement (17420) Intelligent Imaging Systems (IIS) supplied and installed Smart Roadside network software and integrated new connected vehicle roadside devices into the Schodack Smart Roadside system. The Smart Roadsid...

  13. Wireless local area networking for linking a PC reporting system and PACS: clinical feasibility in emergency reporting.

    PubMed

    Yoshihiro, Akiko; Nakata, Norio; Harada, Junta; Tada, Shimpei

    2002-01-01

    Although local area networks (LANs) are commonplace in hospital-based radiology departments today, wireless LANs are still relatively unknown and untried. A linked wireless reporting system was developed to improve work throughput and efficiency. It allows radiologists, physicians, and technologists to review current radiology reports and images and instantly compare them with reports and images from previous examinations. This reporting system also facilitates creation of teaching files quickly, easily, and accurately. It consists of a Digital Imaging and Communications in Medicine 3.0-based picture archiving and communication system (PACS), a diagnostic report server, and portable laptop computers. The PACS interfaces with magnetic resonance imagers, computed tomographic scanners, and computed radiography equipment. The same kind of functionality is achievable with a wireless LAN as with a wired LAN, with comparable bandwidth but with less cabling infrastructure required. This wireless system is presently incorporated into the operations of the emergency and radiology departments, with future plans calling for applications in operating rooms, outpatient departments, all hospital wards, and intensive care units. No major problems have been encountered with the system, which is in constant use and appears to be quite successful. Copyright RSNA, 2002

  14. An integrative approach for analyzing hundreds of neurons in task performing mice using wide-field calcium imaging.

    PubMed

    Mohammed, Ali I; Gritton, Howard J; Tseng, Hua-an; Bucklin, Mark E; Yao, Zhaojie; Han, Xue

    2016-02-08

    Advances in neurotechnology have been integral to the investigation of neural circuit function in systems neuroscience. Recent improvements in high-performance fluorescent sensors and scientific CMOS cameras enable optical imaging of neural networks at a much larger scale. While exciting technical advances demonstrate the potential of this technique, further improvement in data acquisition and analysis, especially improvements that allow effective processing of increasingly larger datasets, would greatly promote the application of optical imaging in systems neuroscience. Here we demonstrate the ability of wide-field imaging to capture the concurrent dynamic activity of hundreds to thousands of neurons over millimeters of brain tissue in behaving mice. This system allows the visualization of morphological details at a higher spatial resolution than has previously been achieved using similar functional imaging modalities. To analyze the expansive data sets, we developed software to facilitate rapid downstream data processing. Using this system, we show that a large fraction of anatomically distinct hippocampal neurons respond to discrete environmental stimuli associated with classical conditioning and that the observed temporal dynamics of transient calcium signals are sufficient for exploring certain spatiotemporal features of large neural networks.
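
    A common first step in analyzing such transient calcium signals is converting raw fluorescence traces into dF/F. The sketch below uses a per-neuron percentile baseline on synthetic traces; the percentile and the synthetic data are assumptions for illustration, not the authors' processing pipeline.

      import numpy as np

      def delta_f_over_f(traces, baseline_percentile=20):
          """Convert raw fluorescence traces (neurons x frames) into dF/F using
          a per-neuron percentile baseline; a simple stand-in for the kind of
          preprocessing applied to wide-field calcium data."""
          f0 = np.percentile(traces, baseline_percentile, axis=1, keepdims=True)
          return (traces - f0) / f0

      rng = np.random.default_rng(0)
      frames = 500
      baseline = 100 + 5 * rng.standard_normal((300, frames))   # 300 synthetic neurons
      events = np.zeros((300, frames))
      events[::10, 250:260] = 40                                 # transient in every 10th cell
      dff = delta_f_over_f(baseline + events)
      print(dff.shape, round(float(dff[::10, 250:260].mean()), 2))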

  15. Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks.

    PubMed

    Cheng, Phillip M; Tejura, Tapas K; Tran, Khoa N; Whang, Gilbert

    2018-05-01

    The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized into obstructive and non-obstructive categories independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78-0.89). At the maximum Youden index (sensitivity + specificity-1), the sensitivity of the system for small bowel obstruction is 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.
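
    The "retrain only the final classification layer" strategy can be reproduced generically with tf.keras, as in the sketch below: a frozen Inception v3 feature extractor with a new binary head. The optimizer, pooling layer, and metrics are assumptions, data loading is omitted, and this is not the authors' training code.

      import tensorflow as tf

      # Frozen Inception v3 backbone (ImageNet weights) with a new binary head;
      # only the head is trained, mirroring the transfer-learning setup above.
      base = tf.keras.applications.InceptionV3(
          include_top=False, weights="imagenet", input_shape=(299, 299, 3))
      base.trainable = False

      model = tf.keras.Sequential([
          base,
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dense(1, activation="sigmoid"),   # obstruction probability
      ])
      model.compile(optimizer="adam",
                    loss="binary_crossentropy",
                    metrics=[tf.keras.metrics.AUC(name="auc")])
      model.summary()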

  16. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and the mass storage management. This issue was addressed in the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding techniques with their related parameters and the description of the image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server: because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow devices to be cataloged and device status and network location to be modified. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  17. Sensor-based architecture for medical imaging workflow analysis.

    PubMed

    Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis

    2014-08-01

    The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in the clinical practice, the information that can be extracted goes far beyond this utilization. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory, through a network of intelligent sensors. The proposed integration framework follows a SOA hybrid architecture based on an information sensor network, capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository meta-data, network workflows and examination reports. Each sensor is responsible for converting unstructured information from data sources into a common format that will then be semantically indexed in the framework engine. The platform was deployed in the Cardiology department of a central hospital, allowing identification of processes' characteristics and users' behaviours that were unknown before the utilization of this solution.

  18. GNAT: A Global Network of Astronomical Telescopes

    NASA Astrophysics Data System (ADS)

    Crawford, David L.

    1995-12-01

    Astronomical resources are increasingly directed toward development of very large telescopes, and many facilities are compelled to cease operations of smaller telescopes. A real concern is emerging with respect to issues of access to astronomical imaging systems for the majority of astronomers who will have little or no opportunity to work with the larger telescopes. Further concern is developing with regard to the means for conducting observationally intensive fundamental astronomical imaging programs, such as surveys, monitoring, and standards calibration. One attractive potential solution is a global network of (automated) astronomical telescopes (GNAT). Initial steps have been taken to turn this network into a reality. GNAT has been incorporated as a nonprofit corporation, membership drives have begun and several institutions have joined. The first two open GNAT meetings have now been held to define hardware and software systems, and an order has been placed for the first of the GNAT automated telescopes. In this presentation we discuss the goals and status of GNAT and its implications for astronomical imaging.

  19. Target Recognition Using Neural Networks for Model Deformation Measurements

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.; Hibler, David L.

    1999-01-01

    Optical measurements provide a non-invasive method for measuring deformation of wind tunnel models. Model deformation systems use targets mounted or painted on the surface of the model to identify known positions, and photogrammetric methods are used to calculate 3-D positions of the targets on the model from digital 2-D images. Under ideal conditions, the reflective targets are placed against a dark background and provide high-contrast images, aiding in target recognition. However, glints of light reflecting from the model surface, or reduced contrast caused by light source or model smoothness constraints, can compromise accurate target determination using current algorithmic methods. This paper describes a technique using a neural network and image processing technologies which increases the reliability of target recognition systems. Unlike algorithmic methods, the neural network can be trained to identify the characteristic patterns that distinguish targets from other objects of similar size and appearance and can adapt to changes in lighting and environmental conditions.

  20. Neuro-classification of multi-type Landsat Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Zhuang, Xin; Engel, Bernard A.; Fernandez, R. N.; Johannsen, Chris J.

    1991-01-01

    Neural networks have been successful in image classification and have shown potential for classifying remotely sensed data. This paper presents classifications of multitype Landsat Thematic Mapper (TM) data using neural networks. The Landsat TM image for March 23, 1987, with accompanying ground observation data for a study area in Miami County, Indiana, U.S.A., was utilized to assess recognition of crop residues. Principal components and spectral ratio transformations were performed on the TM data. In addition, a layer of the geographic information system (GIS) for the study site was incorporated to generate GIS-enhanced TM data. This paper discusses (1) the performance of neuro-classification on each type of data, (2) how neural networks recognized each type of data as a new image and (3) comparisons of the results for each type of data obtained using neural networks, maximum likelihood, and minimum distance classifiers.

  1. Definition and evaluation of the data-link layer of PACnet

    NASA Astrophysics Data System (ADS)

    Alsafadi, Yasser H.; Martinez, Ralph; Sanders, William H.

    1991-07-01

    PACnet is a 200-500 Mbps dual-ring fiber optic network designed to implement a picture archiving and communication system (PACS) in a hospital environment. The network consists of three channels: an image transfer channel, a command and control channel, and a real-time data channel. An initial network interface unit (NIU) design for PACnet consisted of a functional description of the protocols and NIU major components. In order to develop a demonstration prototype, additional definition of protocol algorithms of each channel is necessary. Using the International Standards Organization/Open Systems Interconnection (ISO/OSI) reference model as a guide, the definition of the data link layer is extended. This definition covers interface service specifications for the two constituent sublayers: logical link control (LLC) and medium access control (MAC). Furthermore, it describes procedures for data transfer, mechanisms of error detection and fault recovery. A performance evaluation study was then made to determine how the network performs under various application scenarios. The performance evaluation study was performed using stochastic activity networks, which can formally describe the network behavior. The results of the study demonstrate the feasibility of PACnet as an integrated image, data, and voice network for PACS.

  2. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than that of network-attached storage. PMID:28884169
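    The row-key idea can be illustrated with a short sketch: fields are ordered project > subject > session > scan > slice and padded to a fixed width so that lexicographic row order matches the imaging hierarchy and related rows collocate. The field widths below are assumptions, not the paper's exact design.

    ```python
    def make_row_key(project: str, subject: int, session: int,
                     scan: int, slice_idx: int) -> bytes:
        # Fixed-width, zero-padded fields keep hierarchically related rows adjacent.
        return "{:>12s}-{:06d}-{:04d}-{:04d}-{:05d}".format(
            project, subject, session, scan, slice_idx).encode("ascii")

    # All slices of project "ADNI", subject 17 share a common key prefix, so a
    # prefix scan retrieves a whole subject (or session/scan) in one contiguous read.
    key = make_row_key("ADNI", 17, 2, 1, 42)
    subject_prefix = key[:19]   # project + subject fields
    ```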

  3. White blood cells identification system based on convolutional deep neural learning networks.

    PubMed

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

    White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently available automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small, limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, classification of a limited, balanced WBC dataset is performed with WBCsNet as a pre-trained network. During our experiments, three different public WBC datasets (2551 images) containing 5 healthy WBC types were used. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches and of the previous traditional identification systems. We also present visualizations of the WBCsNet activations, which show stronger responses than those of the pre-trained networks. In conclusion, a novel WBC identification system based on deep learning is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
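    A hedged sketch of the first methodology (deep activation features): a pre-trained CNN is used as a fixed feature extractor, and a conventional classifier is trained on its penultimate-layer activations. The choice of backbone (VGG16) and classifier (linear SVM) is illustrative rather than the paper's exact configuration.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.svm import LinearSVC

    # Pre-trained CNN with its final classification layer removed: the output is
    # a 4096-dimensional activation vector per image.
    backbone = models.vgg16(pretrained=True)
    backbone.classifier = nn.Sequential(*list(backbone.classifier.children())[:-1])
    backbone.eval()

    @torch.no_grad()
    def deep_features(batch):          # batch: N x 3 x 224 x 224 WBC image patches
        return backbone(batch).cpu().numpy()

    def fit_wbc_classifier(X_train, y_train):
        """Train the traditional identification stage on extracted activations;
        X_train holds deep features, y_train the WBC-type labels."""
        clf = LinearSVC(C=1.0)
        clf.fit(X_train, y_train)
        return clf
    ```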

  4. Development of automated system based on neural network algorithm for detecting defects on molds installed on casting machines

    NASA Astrophysics Data System (ADS)

    Bazhin, V. Yu; Danilov, I. V.; Petrov, P. A.

    2018-05-01

    During the casting of light alloys and ligatures based on aluminum and magnesium, problems arise with the qualitative distribution of the metal and its crystallization in the mold. To monitor mold defects on the casting conveyor, a camera with a resolution of 780 x 580 pixels and a shooting rate of 75 frames per second was selected. Images of molds from casting machines were used as input data for the neural network algorithm. At the stage of preparing the digital database and its analytical evaluation, a convolutional neural network architecture was chosen for the algorithm. The information flow from the local controller is transferred to the OPC server and then to the SCADA system of the foundry. After training, the accuracy of neural network defect recognition was about 95.1% on a validation split. The trained weight coefficients were then applied to the test split, on which the algorithm achieved accuracy identical to that on the validation images. The proposed technical solutions make it possible to increase the efficiency of the automated process control system in the foundry by expanding the digital database.

  5. An Archive of Digital Images.

    ERIC Educational Resources Information Center

    Fantini, M.; And Others

    1990-01-01

    Describes the architecture of the prototype of an image management system that has been used to develop an application concerning images of frescoes in the Sistina Chapel in the Vatican. Hardware and software design are described, the use of local area networks (LANs) is discussed, and data organization is explained. (15 references) (LRW)

  6. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites, and user interface technology with simple on-screen displays is increasingly common. User demands are growing and the range of applicable fields is widening due to the high penetration rate of the Internet, so demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images over the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server. Each frame from the web camera is compared with the previous one to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS-232 serial cable. The embedded board utilized the S3C2410 MPU, which uses the ARM920T core from Samsung. An embedded Linux kernel was ported to the board and a root file system was mounted. The stored images are sent to the client PC through the web browser, using the networking functions of Linux and a program based on the TCP/IP protocol.
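    The displacement-vector step can be sketched as a straightforward sum-of-absolute-differences block match between consecutive frames; the block and search-window sizes below are illustrative, and the frame is assumed to be larger than the block plus the search margin.

    ```python
    import numpy as np

    def block_match(prev: np.ndarray, curr: np.ndarray,
                    block: int = 32, search: int = 8):
        """Return the (dy, dx) displacement of the central block between two
        grayscale frames, found by minimising the sum of absolute differences."""
        h, w = prev.shape
        y0, x0 = (h - block) // 2, (w - block) // 2
        ref = prev[y0:y0 + block, x0:x0 + block].astype(np.int32)
        best, best_dy, best_dx = None, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = curr[y0 + dy:y0 + dy + block,
                            x0 + dx:x0 + dx + block].astype(np.int32)
                sad = np.abs(cand - ref).sum()
                if best is None or sad < best:
                    best, best_dy, best_dx = sad, dy, dx
        return best_dy, best_dx   # displacement vector fed to pan/tilt control
    ```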

  7. One hundred case studies of Asia-Pacific telemedicine using a digital video transport system over a research and education network.

    PubMed

    Shimizu, Shuji; Nakashima, Naoki; Okamura, Koji; Tanaka, Masao

    2009-01-01

    Although the use of video in telemedicine is most helpful, the transmission of high-quality moving images is difficult in conventional systems due to the limitation of network bandwidth and the quality of service. We have established a new system via the academic broadband network that can preserve the original quality and assure smooth movement of the image. Here we report on 100 case studies and discuss the lessons we have learned. Kyushu University Hospital in Fukuoka, Japan, was linked to 53 medical institutions and meeting venues in 13 countries and regions over the Asia-Pacific Advanced Network, an international research and education consortium. The digital video transport system (DVTS), free software that transforms digital video signals directly into Internet Protocol, was installed on a personal computer (PC) with a network bandwidth of 30 Mbps per channel. Between February 2003 and June 2007, 100 telecommunication sessions were held, 94 of which were international and 6 domestic. Furthermore, 47 involved real-time demonstrations and 53 interactive teleconferences using video or PC presentations. Multiple stations were connected in 37 events, and the number of connected stations in total reached 269. The time delay was restricted to 0.3-1.0 seconds between the stations. Participants provided feedback via questionnaires, and with respect to image quality, 509 (68.3%) participants reported "very good," 206 (27.7%) reported "good," 19 (2.6%) reported "poor," and 11 (1.5%) reported "very poor." DVTS is both economical, with a minimal initial investment, and simple to set up, and this is the first time that this advanced system has been used so widely in the Asia-Pacific region. Because the high-speed academic network for research and education is available worldwide, we believe our cutting-edge technology will facilitate medical standardization beyond geographic borders in the world.

  8. Tools to manage the enterprise-wide picture archiving and communications system environment.

    PubMed

    Lannum, L M; Gumpf, S; Piraino, D

    2001-06-01

    The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.

  9. In-situ Planetary Subsurface Imaging System

    NASA Astrophysics Data System (ADS)

    Song, W.; Weber, R. C.; Dimech, J. L.; Kedar, S.; Neal, C. R.; Siegler, M.

    2017-12-01

    Geophysical and seismic instruments are considered the most effective tools for studying the detailed global structures of planetary interiors. A planet's interior bears the geochemical markers of its evolutionary history, as well as its present state of activity, which has direct implications to habitability. On Earth, subsurface imaging often involves massive data collection from hundreds to thousands of geophysical sensors (seismic, acoustic, etc) followed by transfer by hard links or wirelessly to a central location for post processing and computing, which will not be possible in planetary environments due to imposed mission constraints on mass, power, and bandwidth. Emerging opportunities for geophysical exploration of the solar system from Venus to the icy Ocean Worlds of Jupiter and Saturn dictate that subsurface imaging of the deep interior will require substantial data reduction and processing in-situ. The Real-time In-situ Subsurface Imaging (RISI) technology is a mesh network that senses and processes geophysical signals. Instead of data collection then post processing, the mesh network performs the distributed data processing and computing in-situ, and generates an evolving 3D subsurface image in real-time that can be transmitted under bandwidth and resource constraints. Seismic imaging algorithms (including traveltime tomography, ambient noise imaging, and microseismic imaging) have been successfully developed and validated using both synthetic and real-world terrestrial seismic data sets. The prototype hardware system has been implemented and can be extended as a general field instrumentation platform tailored specifically for a wide variety of planetary uses, including crustal mapping, ice and ocean structure, and geothermal systems. The team is applying the RISI technology to real off-world seismic datasets. For example, the Lunar Seismic Profiling Experiment (LSPE) deployed during the Apollo 17 Moon mission consisted of four geophone instruments spaced up to 100 meters apart, which in essence forms a small aperture seismic network. A pattern recognition technique based on Hidden Markov Models was able to characterize this dataset, and we are exploring how the RISI technology can be adapted for this dataset.

  10. Improvement of single wavelength-based Thai jasmine rice identification with elliptic Fourier descriptor and neural network analysis

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2012-11-01

    Instead of considering only the amount of fluorescent signal spatially distributed over the image of milled rice grains, this paper shows how our single-wavelength spectral-imaging-based Thai jasmine (KDML105) rice identification system can be improved by analyzing the shape and size of the image of each milled rice variety, especially during the image thresholding operation. The image of each milled rice variety is expressed as chain codes and elliptic Fourier coefficients. After that, a feed-forward back-propagation neural network model is applied, resulting in an improved average false acceptance rate (FAR) of 11.0% and false rejection rate (FRR) of 19.0% in identifying KDML105 milled rice from the four unwanted milled rice varieties.
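    As a simplified illustration of the shape-description step, the sketch below computes complex Fourier descriptors of a closed grain contour and normalises them for translation, scale and rotation; this is a close relative of, but not identical to, the elliptic Fourier formulation used in the paper.

    ```python
    import numpy as np

    def fourier_shape_descriptor(contour_xy: np.ndarray, n_coeffs: int = 10):
        """contour_xy: K x 2 array of (x, y) boundary points of one rice grain."""
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]     # complex boundary signal
        coeffs = np.fft.fft(z)
        # Drop the DC term (translation) and normalise by the first harmonic's
        # magnitude (scale); keeping magnitudes only makes the descriptor
        # invariant to rotation and starting point.
        desc = np.abs(coeffs[1:n_coeffs + 1]) / (np.abs(coeffs[1]) + 1e-12)
        return desc   # feature vector passed to the feed-forward neural network
    ```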

  11. NET: a new framework for the vectorization and examination of network data.

    PubMed

    Lasser, Jana; Katifori, Eleni

    2017-01-01

    The analysis of complex networks both in general and in particular as pertaining to real biological systems has been the focus of intense scientific attention in the past and present. In this paper we introduce two tools that provide fast and efficient means for the processing and quantification of biological networks like Drosophila tracheoles or leaf venation patterns: the Network Extraction Tool (NET) to extract data and the Graph-edit-GUI (GeGUI) to visualize and modify networks. NET is especially designed for high-throughput semi-automated analysis of biological datasets containing digital images of networks. The framework starts with the segmentation of the image and then proceeds to vectorization using methodologies from optical character recognition. After a series of steps to clean and improve the quality of the extracted data the framework produces a graph in which the network is represented only by its nodes and neighborhood-relations. The final output contains information about the adjacency matrix of the graph, the width of the edges and the positions of the nodes in space. NET also provides tools for statistical analysis of the network properties, such as the number of nodes or total network length. Other, more complex metrics can be calculated by importing the vectorized network to specialized network analysis packages. GeGUI is designed to facilitate manual correction of non-planar networks as these may contain artifacts or spurious junctions due to branches crossing each other. It is tailored for but not limited to the processing of networks from microscopy images of Drosophila tracheoles. The networks extracted by NET closely approximate the network depicted in the original image. NET is fast, yields reproducible results and is able to capture the full geometry of the network, including curved branches. Additionally GeGUI allows easy handling and visualization of the networks.
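    The core vectorization step can be approximated in a few lines: segment the image, skeletonize it, and connect neighbouring skeleton pixels into a graph. NET itself performs considerably more work (junction handling, edge-width estimation, cleaning), so the sketch below only conveys the idea.

    ```python
    import numpy as np
    import networkx as nx
    from skimage.filters import threshold_otsu
    from skimage.morphology import skeletonize

    def image_to_pixel_graph(image: np.ndarray) -> nx.Graph:
        """Segment, skeletonize and connect 8-neighbouring skeleton pixels."""
        binary = image > threshold_otsu(image)
        skel = skeletonize(binary)
        graph = nx.Graph()
        h, w = skel.shape
        ys, xs = np.nonzero(skel)
        for y, x in zip(ys, xs):
            graph.add_node((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx_ = y + dy, x + dx
                    if (dy or dx) and 0 <= ny < h and 0 <= nx_ < w and skel[ny, nx_]:
                        graph.add_edge((y, x), (ny, nx_))
        return graph
    ```

    From such a pixel graph, summary statistics like the number of nodes or an approximate total network length follow directly from node and edge counts; the full NET pipeline further reduces this to a node/edge graph with edge widths and node positions, as described above.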

  12. Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System

    NASA Technical Reports Server (NTRS)

    Lin, Tsung Han (Hank)

    2011-01-01

    JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
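    GentleBoost itself is compact enough to sketch: at each round a shallow regression tree is fitted to the ±1 labels by weighted least squares, its output is added to the ensemble score, and the sample weights are updated multiplicatively. The hyperparameters below are illustrative, and the feature-extraction pipeline feeding the classifier is not reproduced.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    class GentleBoost:
        def __init__(self, n_rounds: int = 100, max_depth: int = 2):
            self.n_rounds, self.max_depth = n_rounds, max_depth
            self.stages = []

        def fit(self, X, y):                     # y in {-1, +1}
            w = np.full(len(y), 1.0 / len(y))
            for _ in range(self.n_rounds):
                tree = DecisionTreeRegressor(max_depth=self.max_depth)
                tree.fit(X, y, sample_weight=w)  # weighted least-squares fit
                f = tree.predict(X)
                w *= np.exp(-y * f)              # GentleBoost weight update
                w /= w.sum()
                self.stages.append(tree)
            return self

        def decision_function(self, X):
            return sum(t.predict(X) for t in self.stages)

        def predict(self, X):
            return np.sign(self.decision_function(X))
    ```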

  13. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real-time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware comprises two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper, but are also more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  14. Two Years Experience With A Broadband Cable Network In An 1100-Bed Hospital

    NASA Astrophysics Data System (ADS)

    Cahill, Patrick T.; McCarthy, Robert H.; James, R.; Knowles, R.

    1985-09-01

    Early in 1983, a three-cable broadband network was installed in The New York Hospital-Cornell Medical Center using well-established cable-TV technology. This network was configured in a vertical tree topology. Currently, it extends over thirteen floors vertically and over two city blocks horizontally. It has now survived several major renovations on the various floors of the hospital. This survivability is a result of the siting of the main tree and of the isolation gained for the branches through the strategic placement of amplifiers. The communications system was designed in a modular fashion for later expansion, so that seven types of functions could be supported on the network without the addition of a new functional level disrupting the functions already existing on the system. Thus far, two functions (real-time image consultation and computer sharing) have been implemented, and two other functions (analog image storage and database management) are in the prototype stage. Perhaps the most significant feature of our experience thus far has been the ease and utility of analog transmission and storage of images. This experience has led us to postpone and even de-emphasize digital transmission and storage in our future plans.

  15. Cytopathological image analysis using deep-learning networks in microfluidic microscopy.

    PubMed

    Gopakumar, G; Hari Babu, K; Mishra, Deepak; Gorthi, Sai Siva; Sai Subrahmanyam, Gorthi R K

    2017-01-01

    Cytopathologic testing is one of the most critical steps in the diagnosis of diseases, including cancer. However, the task is laborious and demands skill. Associated high cost and low throughput drew considerable interest in automating the testing process. Several neural network architectures were designed to provide human expertise to machines. In this paper, we explore and propose the feasibility of using deep-learning networks for cytopathologic analysis by performing the classification of three important unlabeled, unstained leukemia cell lines (K562, MOLT, and HL60). The cell images used in the classification are captured using a low-cost, high-throughput cell imaging technique: microfluidics-based imaging flow cytometry. We demonstrate that without any conventional fine segmentation followed by explicit feature extraction, the proposed deep-learning algorithms effectively classify the coarsely localized cell lines. We show that the designed deep belief network as well as the deeply pretrained convolutional neural network outperform the conventionally used decision systems and are important in the medical domain, where the availability of labeled data is limited for training. We hope that our work enables the development of a clinically significant high-throughput microfluidic microscopy-based tool for disease screening/triaging, especially in resource-limited settings.

  16. USC orthogonal multiprocessor for image processing with neural networks

    NASA Astrophysics Data System (ADS)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  17. How acute and chronic alcohol consumption affects brain networks: insights from multimodal neuroimaging.

    PubMed

    Schulte, Tilman; Oberlin, Brandon G; Kareken, David A; Marinkovic, Ksenija; Müller-Oehring, Eva M; Meyerhoff, Dieter J; Tapert, Susan

    2012-12-01

    Multimodal imaging combining 2 or more techniques is becoming increasingly important because no single imaging approach has the capacity to elucidate all clinically relevant characteristics of a network. This review highlights recent advances in multimodal neuroimaging (i.e., combined use and interpretation of data collected through magnetic resonance imaging [MRI], functional MRI, diffusion tensor imaging, positron emission tomography, magnetoencephalography, MR perfusion, and MR spectroscopy methods) that leads to a more comprehensive understanding of how acute and chronic alcohol consumption affect neural networks underlying cognition, emotion, reward processing, and drinking behavior. Several innovative investigators have started utilizing multiple imaging approaches within the same individual to better understand how alcohol influences brain systems, both during intoxication and after years of chronic heavy use. Their findings can help identify mechanism-based therapeutic and pharmacological treatment options, and they may increase the efficacy and cost effectiveness of such treatments by predicting those at greatest risk for relapse. Copyright © 2012 by the Research Society on Alcoholism.

  18. Simultaneous binary hash and features learning for image retrieval

    NASA Astrophysics Data System (ADS)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have many applications in the modern world. The most important one is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique. This is the main reason why this kind of automatic image processing has attracted so much attention during recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach provides a mapping from a pixel-based image representation to a hash-value space while trying to preserve as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared with other state-of-the-art methods.
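    Conceptually, the two-network design can be sketched as a CNN feature vector passed through a small autoencoder whose bottleneck, squashed by tanh, is binarised by sign to give the hash. Layer sizes and code length are assumptions, and the paper's joint training objective (similarity preservation, statistical independence) is only indicated in a comment.

    ```python
    import torch
    import torch.nn as nn

    class HashAutoencoder(nn.Module):
        def __init__(self, feat_dim: int = 512, bits: int = 48):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(feat_dim, 256), nn.ReLU(),
                nn.Linear(256, bits), nn.Tanh())      # codes in (-1, 1)
            self.decoder = nn.Sequential(
                nn.Linear(bits, 256), nn.ReLU(),
                nn.Linear(256, feat_dim))

        def forward(self, features):                  # features: CNN descriptors
            code = self.encoder(features)
            recon = self.decoder(code)
            return code, recon

    def binary_hash(code: torch.Tensor) -> torch.Tensor:
        return (code > 0).to(torch.uint8)             # sign binarisation

    # Training would minimise reconstruction error plus terms encouraging
    # similarity preservation and near-binary, decorrelated code components.
    ```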

  19. A novel microaneurysms detection approach based on convolutional neural networks with reinforcement sample learning algorithm.

    PubMed

    Budak, Umit; Şengür, Abdulkadir; Guo, Yanhui; Akbulut, Yaman

    2017-12-01

    Microaneurysms (MAs), which appear as red lesions in color fundus images, are known as early signs of diabetic retinopathy. Detection of MAs in fundus images requires highly skilled physicians or eye angiography, and eye angiography is an invasive and expensive procedure. Therefore, an automatic detection system to identify MA locations in fundus images is in demand. In this paper, we propose a system to detect MAs in colored fundus images. The proposed method is composed of three stages. In the first stage, a series of pre-processing steps is used to make the input images more suitable for MA detection. To this end, green channel decomposition, Gaussian filtering, median filtering, background determination, and subtraction operations are applied to the input colored fundus images. After pre-processing, a candidate MA extraction procedure is applied to detect potential regions; a five-step procedure is adopted to obtain the potential MA locations. Finally, a deep convolutional neural network (DCNN) with a reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on the ROC dataset to evaluate our proposal, and the results are encouraging.
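    The pre-processing stage maps naturally onto a few OpenCV calls; the kernel sizes below are illustrative choices rather than the paper's exact values.

    ```python
    import cv2
    import numpy as np

    def preprocess_fundus(bgr_image: np.ndarray) -> np.ndarray:
        """Green-channel extraction, smoothing, background estimation and
        subtraction, so that small dark lesions (MA candidates) stand out."""
        green = bgr_image[:, :, 1]                         # green channel (uint8)
        smoothed = cv2.GaussianBlur(green, (5, 5), 0)      # denoise
        smoothed = cv2.medianBlur(smoothed, 3)
        background = cv2.medianBlur(smoothed, 61)          # coarse background
        enhanced = cv2.subtract(background, smoothed)      # MAs become bright
        return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX)
    ```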

  20. Revealing topological organization of human brain functional networks with resting-state functional near infrared spectroscopy.

    PubMed

    Niu, Haijing; Wang, Jinhui; Zhao, Tengda; Shu, Ni; He, Yong

    2012-01-01

    The human brain is a highly complex system that can be represented as a structurally interconnected and functionally synchronized network, which assures both the segregation and integration of information processing. Recent studies have demonstrated that a variety of neuroimaging and neurophysiological techniques such as functional magnetic resonance imaging (MRI), diffusion MRI and electroencephalography/magnetoencephalography can be employed to explore the topological organization of human brain networks. However, little is known about whether functional near infrared spectroscopy (fNIRS), a relatively new optical imaging technology, can be used to map functional connectome of the human brain and reveal meaningful and reproducible topological characteristics. We utilized resting-state fNIRS (R-fNIRS) to investigate the topological organization of human brain functional networks in 15 healthy adults. Brain networks were constructed by thresholding the temporal correlation matrices of 46 channels and analyzed using graph-theory approaches. We found that the functional brain network derived from R-fNIRS data had efficient small-world properties, significant hierarchical modular structure and highly connected hubs. These results were highly reproducible both across participants and over time and were consistent with previous findings based on other functional imaging techniques. Our results confirmed the feasibility and validity of using graph-theory approaches in conjunction with optical imaging techniques to explore the topological organization of human brain networks. These results may expand a methodological framework for utilizing fNIRS to study functional network changes that occur in association with development, aging and neurological and psychiatric disorders.
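    The graph construction and small-world metrics involved can be sketched as follows: threshold the channel-by-channel correlation matrix, build an undirected graph, and compute the average clustering coefficient and characteristic path length. The threshold value used here is an illustrative assumption.

    ```python
    import numpy as np
    import networkx as nx

    def brain_graph_metrics(corr: np.ndarray, threshold: float = 0.3):
        """corr: channel-by-channel temporal correlation matrix (e.g., 46 x 46)."""
        n = corr.shape[0]
        adj = (np.abs(corr) > threshold) & ~np.eye(n, dtype=bool)
        g = nx.from_numpy_array(adj.astype(int))
        clustering = nx.average_clustering(g)
        # characteristic path length on the largest connected component
        giant = g.subgraph(max(nx.connected_components(g), key=len))
        path_length = nx.average_shortest_path_length(giant)
        return clustering, path_length
    ```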

  1. Fully automatic cervical vertebrae segmentation framework for X-ray images.

    PubMed

    Al Arif, S M Masudur Rahman; Knapp, Karen; Slabaugh, Greg

    2018-04-01

    The cervical spine is a highly flexible anatomy and therefore vulnerable to injuries. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human errors. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper, we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. A study for watermark methods appropriate to medical images.

    PubMed

    Cho, Y; Ahn, B; Kim, J S; Kim, I Y; Kim, S I

    2001-06-01

    The network system, including the picture archiving and communication system (PACS), is essential in hospital and medical imaging fields these days. Many medical images are accessed and processed on the web, as well as in PACS. Therefore, any possible accidents caused by the illegal modification of medical images must be prevented. Digital image watermark techniques have been proposed as a method to protect against illegal copying or modification of copyrighted material. Invisible signatures made by a digital image watermarking technique can be a solution to these problems. However, medical images have some different characteristics from normal digital images in that one must not corrupt the information contained in the original medical images. In this study, we suggest modified watermark methods appropriate for medical image processing and communication system that prevent clinically important data contained in original images from being corrupted.

  3. Ultrahigh-speed ultrahigh-resolution adaptive optics: optical coherence tomography system for in-vivo small animal retinal imaging

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Xu, Jing; Zawadzki, Robert J.; Sarunic, Marinko V.

    2013-03-01

    Small animal models of human retinal diseases are a critical component of vision research. In this report, we present an ultrahigh-resolution ultrahigh-speed adaptive optics optical coherence tomography (AO-OCT) system for small animal retinal imaging (mouse, fish, etc.). We adapted our imaging system to different types of small animals in accordance with the optical properties of their eyes. Results of AO-OCT images of small animal retinas acquired with AO correction are presented. Cellular structures including nerve fiber bundles, capillary networks and detailed double-cone photoreceptors are visualized.

  4. Comprehensive Digital Imaging Network Project At Georgetown University Hospital

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Stauffer, Douglas; Zeman, Robert; Benson, Harold; Wang, Paul; Allman, Robert

    1987-10-01

    The radiology practice is going through rapid changes due to the introduction of state-of-the-art computed based technologies. For the last twenty years we have witnessed the introduction of many new medical diagnostic imaging systems such as x-ray computed tomo-graphy, digital subtraction angiography (DSA), computerized nuclear medicine, single pho-ton emission computed tomography (SPECT), positron emission tomography (PET) and more re-cently, computerized digital radiography and nuclear magnetic resonance imaging (MRI). Other than the imaging systems, there has been a steady introduction of computed based information systems for radiology departments and hospitals.

  5. Accurate segmentation of lung fields on chest radiographs using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory

    2017-02-01

    Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape and texture of lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis, in which lung field segmentation is a significant primary step. Although many methods have been proposed for this problem, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an IoU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation. The suggested framework outperforms state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
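    The IoU figure quoted above follows the standard definition, computed for a predicted and a reference binary lung-field mask as in the short sketch below.

    ```python
    import numpy as np

    def iou(pred: np.ndarray, truth: np.ndarray) -> float:
        """Intersection over union of two binary segmentation masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return float(intersection) / float(union) if union else 1.0
    ```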

  6. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  7. Deep learning based state recognition of substation switches

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2018-06-01

    In contrast to the traditional method, which recognizes the state of substation switches based on the running rules of the electrical power system, this work proposes a novel convolutional neural network-based state recognition approach for substation switches. Inspired by the theory of transfer learning, we first establish a convolutional neural network model trained on the large-scale image set ILSVRC2012; then a restricted Boltzmann machine is employed to replace the fully connected layer of the convolutional neural network and is trained on our small image dataset of 110 kV substation switches to obtain a stronger model. Experiments conducted on our image dataset of 110 kV substation switches show that the proposed approach can be applied in substations to reduce running costs and enable truly unattended operation.

  8. Content-based image retrieval on mobile devices

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Abdullah, Shafaq; Kiranyaz, Serkan; Gabbouj, Moncef

    2005-03-01

    The content-based image retrieval area possesses tremendous potential for exploration and utilization, equally for researchers and for industry, due to its promising results. Expeditious retrieval of desired images requires indexing of the content in large-scale databases along with extraction of low-level features based on the content of these images. With the recent advances in wireless communication technology and the availability of multimedia-capable phones, it has become vital to enable query operations on image databases and retrieve results based on image content. In this paper we present a content-based image retrieval system for mobile platforms, providing the capability of content-based query to any mobile device that supports the Java platform. The system consists of a light-weight client application running on a Java-enabled device and a server containing a servlet running inside a Java-enabled web server. The server responds to image queries using efficient native code against the selected image database. The client application, running on a mobile phone, is able to initiate a query request, which is handled by a servlet on the server to find the closest match to the queried image. The retrieved results are transmitted over the mobile network and the images are displayed on the mobile phone. We conclude that such a system serves as a basis for content-based information retrieval on wireless devices and needs to cope with factors such as the constraints of hand-held devices and the reduced network bandwidth available in mobile environments.

  9. 75 FR 77885 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... three dimensional vascular networks from medical and basic research images. Deregulation of angiogenesis...

  10. Optimization with artificial neural network systems - A mapping principle and a comparison to gradient based methods

    NASA Technical Reports Server (NTRS)

    Leong, Harrison Monfook

    1988-01-01

    General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient-search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.
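    A toy version of the comparison can be written down directly: the same quadratic energy is minimised both by integrating the gradient-flow ODE dx/dt = -grad E(x), standing in for the neural-network dynamical system, and by discrete gradient descent. The energy, tolerances and step size are illustrative assumptions and the example is not the paper's medical-imaging problem.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[3.0, 0.4], [0.4, 1.0]])
    b = np.array([1.0, -2.0])
    grad = lambda x: A @ x - b              # gradient of E(x) = 0.5*x'Ax - b'x
    x_star = np.linalg.solve(A, b)          # the known minimiser

    # (a) continuous gradient-flow dynamics, dx/dt = -grad E(x)
    sol = solve_ivp(lambda t, x: -grad(x), (0.0, 50.0), np.zeros(2),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    ts = np.linspace(0.0, 50.0, 5001)
    errs = np.linalg.norm(sol.sol(ts).T - x_star, axis=1)
    t_settle = ts[np.argmax(errs < 1e-3)] if (errs < 1e-3).any() else np.inf

    # (b) discrete gradient descent with a fixed step size
    x, steps = np.zeros(2), 0
    while np.linalg.norm(x - x_star) > 1e-3 and steps < 10_000:
        x -= 0.1 * grad(x)
        steps += 1

    print("gradient-flow settling time:", t_settle,
          "| gradient-descent steps:", steps)
    ```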

  11. Implementation of pulse-coupled neural networks in a CNAPS environment.

    PubMed

    Kinser, J M; Lindblad, T

    1999-01-01

    Pulse-coupled neural networks (PCNNs) are biologically inspired algorithms very well suited for image/signal preprocessing. While several analog implementations have been proposed, we suggest a digital implementation in an existing environment, the Connected Network of Adaptive Processors (CNAPS) system. The reason for this is twofold. First, CNAPS is a commercially available chip which has been used for several neural-network implementations. Second, the PCNN is, in almost all applications, a very efficient component of a system requiring subsequent and additional processing. This may include gating, Fourier transforms, neural classifiers, data mining, etc., with or without feedback to the PCNN.
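    For reference, a compact version of the discrete PCNN iteration that such implementations map onto parallel hardware is sketched below; the parameter values and linking kernel are illustrative defaults rather than the paper's settings.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def pcnn(stimulus, n_iter=10, beta=0.2,
             aF=0.1, aL=1.0, aT=0.5, vF=0.5, vL=1.0, vT=20.0):
        """Standard PCNN iteration on a 2-D stimulus image; returns the list of
        binary pulse images, one per iteration."""
        S = stimulus / (stimulus.max() + 1e-12)
        F = np.zeros_like(S)
        L = np.zeros_like(S)
        Y = np.zeros_like(S)
        T = np.ones_like(S)
        K = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
        outputs = []
        for _ in range(n_iter):
            work = convolve(Y, K, mode="constant")
            F = np.exp(-aF) * F + vF * work + S     # feeding compartment
            L = np.exp(-aL) * L + vL * work         # linking compartment
            U = F * (1.0 + beta * L)                # internal activity
            Y = (U > T).astype(float)               # pulse output
            T = np.exp(-aT) * T + vT * Y            # dynamic threshold
            outputs.append(Y.copy())
        return outputs
    ```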

  12. Cloud Image Data Center for Healthcare Network in Taiwan.

    PubMed

    Weng, Shao-Jen; Lai, Lai-Shiun; Gotcher, Donald; Wu, Hsin-Hung; Xu, Yeong-Yuh; Yang, Ching-Wen

    2016-04-01

    This paper investigates how a healthcare network in Taiwan uses a practical cloud image data center (CIDC) to communicate with its constituent hospital branches. A case study approach was used. The study was carried out in the central region of Taiwan, with four hospitals belonging to the Veterans Hospital healthcare network. The CIDC provides synchronous and asynchronous consultation among these branches. It provides storage, platforms, and services on demand to the hospitals. Any branch client can pull up a patient's medical images from any hospital off this cloud. Patients can be examined at the branches, and the images and reports can be further evaluated by physicians in the main Taichung Veterans General Hospital (TVGH) to enhance the usage and efficiency of equipment in the various branches, thereby shortening the waiting time of patients. The performance of the CIDC over 5 years shows that (1) the total number of cross-hospital images accessed with the CIDC in the branches was 132,712; (2) TVGH assisted the branches in keying in image reports using the CIDC 4,424 times; and (3) implementation of the system has improved management, efficiency, speed and quality of care. Therefore, the results lead to the recommendation of continuing and expanding the cloud computing architecture to improve information sharing among branches in the healthcare network.

  13. Surveillance and reconnaissance ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2001-12-01

    Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These are software and hardware modules, including communications and networks. This ground segment is called MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations; easy adaptation to the evolution of these configurations; interoperability with NATO and multinational forces; security; multi-sensor, multi-platform capabilities; technical modularity; evolutivity; and reduction of life-cycle cost. The general performance characteristics of the MGS are presented: types of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules, and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.

  14. Electronic system with memristive synapses for pattern recognition

    PubMed Central

    Park, Sangsu; Chu, Myonglae; Kim, Jongin; Noh, Jinwoo; Jeon, Moongu; Hun Lee, Byoung; Hwang, Hyunsang; Lee, Boreom; Lee, Byung-geun

    2015-01-01

    Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a/, /i/, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction. PMID:25941950

  15. Multicell migration tracking within angiogenic networks by deep learning-based segmentation and augmented Bayesian filtering.

    PubMed

    Wang, Mengmeng; Ong, Lee-Ling Sharon; Dauwels, Justin; Asada, H Harry

    2018-04-01

    Cell migration is a key feature for living organisms. Image analysis tools are useful in studying cell migration in three-dimensional (3-D) in vitro environments. We consider angiogenic vessels formed in 3-D microfluidic devices (MFDs) and develop an image analysis system to extract cell behaviors from experimental phase-contrast microscopy image sequences. The proposed system initializes tracks with the end-point confocal nuclei coordinates. We apply convolutional neural networks to detect cell candidates and combine backward Kalman filtering with multiple hypothesis tracking to link the cell candidates at each time step. These hypotheses incorporate prior knowledge on vessel formation and cell proliferation rates. The association accuracy reaches 86.4% for the proposed algorithm, indicating that the proposed system is able to associate cells more accurately than existing approaches. Cell culture experiments in 3-D MFDs have shown considerable promise for improving biology research. The proposed system is expected to be a useful quantitative tool for potential microscopy problems of MFDs.
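    The filtering component can be illustrated with a constant-velocity Kalman predict/update step of the kind combined with multiple hypothesis tracking above (shown here in the forward direction, whereas the system runs it backward from the confocal end-points); the noise covariances are illustrative.

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state: x, y, vx, vy
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only x, y are observed
    Q = 0.01 * np.eye(4)                                 # process noise
    R = 1.0 * np.eye(2)                                  # measurement noise

    def kalman_step(x, P, z):
        """One predict/update cycle given a detected cell position z (2-vector)."""
        # predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new
    ```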

  16. Development of Sorting System for Fishes by Feed-forward Neural Networks Using Rotation Invariant Features

    NASA Astrophysics Data System (ADS)

    Shiraishi, Yuhki; Takeda, Fumiaki

    In this research, we have developed a sorting system for fishes, which comprises a conveyance part, an image-capturing part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fishes. After the image of the separated fish is captured in the image-capturing part, a rotation-invariant feature is extracted using the two-dimensional fast Fourier transform; the feature is the mean value of the power spectrum over points at the same distance from the origin in the spectral domain. After that, the fishes are classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fishes captured at various angles with a classification ratio of 98.95% for 1044 captured images of five fishes. Further experiments show a classification ratio of 90.7% for 300 fishes using the 10-fold cross-validation method.
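
    A minimal NumPy sketch of the rotation-invariant feature described above: the 2-D power spectrum is averaged over rings at equal distance from the origin. The image size and the number of rings are illustrative assumptions.

```python
import numpy as np

def rotation_invariant_feature(image, n_rings=32):
    """Mean power-spectrum value over rings of equal radius in the 2-D FFT.

    Rotating the object rotates the spectrum by the same angle, so averaging
    over each ring yields an (approximately) rotation-invariant descriptor.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    r_max = radius.max()
    feature = np.empty(n_rings)
    for i in range(n_rings):
        lo, hi = i * r_max / n_rings, (i + 1) * r_max / n_rings
        mask = (radius >= lo) & (radius < hi)
        feature[i] = spectrum[mask].mean() if mask.any() else 0.0
    return feature

# Example with a synthetic image; a rotated copy gives a very similar feature vector.
img = np.zeros((128, 128)); img[40:90, 55:75] = 1.0
print(rotation_invariant_feature(img)[:5])
```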

  17. The Assessment of Neurological Systems with Functional Imaging

    ERIC Educational Resources Information Center

    Eidelberg, David

    2007-01-01

    In recent years a number of multivariate approaches have been introduced to map neural systems in health and disease. In this review, we focus on spatial covariance methods applied to functional imaging data to identify patterns of regional activity associated with behavior. In the rest state, this form of network analysis can be used to detect…

  18. Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network.

    PubMed

    Wang, Zhiwei; Liu, Chaoyue; Cheng, Danpeng; Wang, Liang; Yang, Xin; Cheng, Kwang-Ting

    2018-05-01

    Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parameter magnetic resonance images (mp-MRI) are of high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of other steps. As a result, they could either involve unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system, where all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed neural network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of lesions' locations. Compared with most existing systems which require supervised labels, e.g., manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross validation using 360 patient data demonstrates that our system achieves a high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient.

  19. Nonlinear microrheology and molecular imaging to map microscale deformations of entangled DNA networks

    NASA Astrophysics Data System (ADS)

    Wu, Tsai-Chin; Anderson, Rae

    We use active microrheology coupled to single-molecule fluorescence imaging to elucidate the microscale dynamics of entangled DNA. DNA naturally exists in a wide range of lengths and topologies, and is often confined in cell nuclei, forming highly concentrated and entangled biopolymer networks. Thus, DNA is the model polymer for understanding entangled polymer dynamics as well as the crowded environment of cells. These networks display complex viscoelastic properties that are not well understood, especially at the molecular-level and in response to nonlinear perturbations. Specifically, how microscopic stresses and strains propagate through entangled networks, and what molecular deformations lead to the network stress responses are unknown. To answer these important questions, we optically drive a microsphere through entangled DNA, perturbing the system far from equilibrium, while measuring the resistive force the DNA exerts on the bead during and after bead motion. We simultaneously image single fluorescent-labeled DNA molecules throughout the network to directly link the microscale stress response to molecular deformations. We characterize the deformation of the network from the molecular-level to the mesoscale, and map the stress propagation throughout the network. We further study the impact of DNA length (11 - 115 kbp) and topology (linear vs ring DNA) on deformation and propagation dynamics, exploring key nonlinear features such as tube dilation and power-law relaxation.

  20. A Wireless Sensor Network for Vineyard Monitoring That Uses Image Processing

    PubMed Central

    Lloret, Jaime; Bosch, Ignacio; Sendra, Sandra; Serrano, Arturo

    2011-01-01

    The first step to detect when a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. To place a sensor in each leaf of every vineyard is obviously not feasible in terms of cost and deployment. We should thus look for new methods to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images from the field and internally uses image processing techniques to detect any unusual status in the leaves. This symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the problem to the farmer. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections from large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how the node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiency, pest, disease or other harmful agents, a symptoms image database and a neuronal network could be added in order to learn from experience and provide an accurate problem diagnosis. PMID:22163948

  2. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2007-03-01

    Multislice CT scanners have remarkably increased the speed at which chest CT images can be acquired for mass screening. Mass screening based on multislice CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. Moreover, we have provided diagnostic assistance by building the lung cancer screening algorithm into a mobile helical CT scanner for lung cancer mass screening in regions without a hospital. We have also developed an electronic medical recording system and a prototype Internet system for community health across two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems to protect the medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system.

  3. Detecting spatial defects in colored patterns using self-oscillating gels

    NASA Astrophysics Data System (ADS)

    Fang, Yan; Yashin, Victor V.; Dickerson, Samuel J.; Balazs, Anna C.

    2018-06-01

    With the growing demand for wearable computers, there is a need for material systems that can perform computational tasks without relying on external electrical power. Using theory and simulation, we design a material system that "computes" by integrating the inherent behavior of self-oscillating gels undergoing the Belousov-Zhabotinsky (BZ) reaction and piezoelectric (PZ) plates. These "BZ-PZ" units are connected electrically to form a coupled oscillator network, which displays specific modes of synchronization. We exploit this attribute in employing multiple BZ-PZ networks to perform pattern matching on complex multi-dimensional data, such as colored images. By decomposing a colored image into sets of binary vectors, we use each BZ-PZ network, or "channel," to store distinct information about the color and the shape of the image and perform the pattern matching operation. Our simulation results indicate that the multi-channel BZ-PZ device can detect subtle differences between the input and stored patterns, such as the color variation of one pixel or a small change in the shape of an object. To demonstrate a practical application, we utilize our system to process a colored Quick Response code and show its potential in cryptography and steganography.

  4. Optical neural network system for pose determination of spinning satellites

    NASA Technical Reports Server (NTRS)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.
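
    The tracker minimizes a quadratic energy over track-assignment neurons by gradient descent; the toy sketch below shows that generic mechanism on two competing "tracks". The weight matrix, bias vector, and step size are placeholders, not the tracker's actual energy function.

```python
import numpy as np

def hopfield_descent(W, b, x0, lr=0.05, steps=500):
    """Minimise E(x) = -0.5 x^T W x - b^T x by gradient descent on analog
    neuron activities kept in [0, 1] (a continuous Hopfield-style relaxation)."""
    x = x0.copy()
    for _ in range(steps):
        grad = -(W @ x) - b          # dE/dx for symmetric W
        x = np.clip(x - lr * grad, 0.0, 1.0)
    return x

# Tiny symmetric example; in a tracker each neuron would encode one candidate track.
W = np.array([[0.0, -1.0], [-1.0, 0.0]])   # mutual inhibition between conflicting tracks
b = np.array([0.8, 0.3])                   # evidence for each track
print(hopfield_descent(W, b, np.full(2, 0.5)))  # the better-supported track wins
```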

  5. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

    Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the refinement of network architectures has received much scholarly attention, from a practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances show that deep neural networks are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how can deep neural networks be used to augment existing large image datasets? This paper focuses on the development of the Thermalnet deep convolutional neural network for augmenting existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  6. Negative functional coupling between the right fronto-parietal and limbic resting state networks predicts increased self-control and later substance use onset in adolescence.

    PubMed

    Lee, Tae-Ho; Telzer, Eva H

    2016-08-01

    Recent developmental brain imaging studies have demonstrated that negatively coupled prefrontal-limbic circuitry implicates the maturation of brain development in adolescents. Using resting-state functional magnetic resonance imaging (rs-fMRI) and independent component analysis (ICA), the present study examined functional network coupling between prefrontal and limbic systems and links to self-control and substance use onset in adolescents. Results suggest that negative network coupling (anti-correlated temporal dynamics) between the right fronto-parietal and limbic resting state networks is associated with greater self-control and later substance use onset in adolescents. These findings increase our understanding of the developmental importance of prefrontal-limbic circuitry for adolescent substance use at the resting-state network level. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  7. Mars Digital Image Model 2.1 Control Network

    NASA Technical Reports Server (NTRS)

    Archinal, B. A.; Kirk, R. L.; Duxbury, T. C.; Lee, E. M.; Sucharski, R.; Cook, D.

    2003-01-01

    USGS is currently preparing a new version of its global Mars digital image mosaic, which will be known as MDIM 2.1. As part of this process we are completing a new photogrammetric solution of the global Mars control network. This is an improved version of the network established earlier by RAND and USGS personnel, as partially described previously. MDIM 2.1 will have many improvements over earlier Viking Orbiter (VO) global mosaics. Geometrically, it will be an orthoimage product, draped on Mars Orbiter Laser Altimeter (MOLA) derived topography, thus accounting properly for the commonly oblique VO imagery. Through the network being described here it will be tied to the newly defined IAU/IAG 2000 Mars coordinate system via ties to MOLA data. Thus, MDIM 2.1 will provide complete global orthorectified imagery coverage of Mars at the resolution of 1/256 deg of MDIM 2.0, and be compatible with MOLA and other products produced in the current coordinate system.

  8. Influence of quality of images recorded in far infrared on pattern recognition based on neural networks and Eigenfaces algorithm

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Kobel, Joanna; Podbielska, Halina

    2003-11-01

    This paper discusses the possibility of exploiting thermovision registration and artificial neural networks for facial recognition systems. A biometric system that is able to identify people from thermograms is presented. To identify a person we used the Eigenfaces algorithm. For face detection in the picture, a backpropagation neural network was designed. For this purpose, thermograms of 10 people under various external conditions were studied. The Eigenfaces algorithm calculated an average face, and then a set of characteristic features for each studied person was produced. The neural network has to detect the face in the image before the face can actually be identified. We used five hidden layers for that purpose. It was shown that recognition errors depend on the feature extraction; for low-quality pictures the error was as high as 30%. However, for pictures with good feature extraction, correct identification rates higher than 90% were obtained.
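
    As a hedged illustration of the Eigenfaces stage (independent of the thermogram data and the face-detection network), the sketch below computes the average face, the leading eigenfaces, and nearest-neighbour identification with plain NumPy; the image size and number of components are arbitrary.

```python
import numpy as np

def eigenfaces(images, n_components=8):
    """images: (N, H, W) array of aligned face images.
    Returns the mean face, eigenfaces, and per-image feature vectors."""
    N, H, W = images.shape
    X = images.reshape(N, -1).astype(float)
    mean_face = X.mean(axis=0)
    A = X - mean_face
    # Small-sample trick: eigenvectors of A A^T (N x N) instead of the huge covariance.
    eigvals, eigvecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(eigvals)[::-1][:n_components]
    faces = A.T @ eigvecs[:, order]              # (H*W, n_components)
    faces /= np.linalg.norm(faces, axis=0)       # unit-norm eigenfaces
    features = A @ faces                         # projection coefficients
    return mean_face.reshape(H, W), faces.T.reshape(-1, H, W), features

def identify(probe, mean_face, faces, gallery_features):
    """Nearest-neighbour identification in eigenface space."""
    coeffs = (probe.ravel() - mean_face.ravel()) @ faces.reshape(len(faces), -1).T
    return int(np.argmin(np.linalg.norm(gallery_features - coeffs, axis=1)))

# Example with random stand-in "thermograms".
rng = np.random.default_rng(0)
gallery = rng.random((10, 32, 32))
mean_face, faces, feats = eigenfaces(gallery)
print(identify(gallery[3], mean_face, faces, feats))   # -> 3
```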

  9. Automated extraction of metadata from remotely sensed satellite imagery

    NASA Technical Reports Server (NTRS)

    Cromp, Robert F.

    1991-01-01

    The paper discusses research in the Intelligent Data Management project at the NASA/Goddard Space Flight Center, with emphasis on recent improvements in low-level feature detection algorithms for performing real-time characterization of images. Images, including MSS and TM data, are characterized using neural networks and the interpretation of the neural network output by an expert system for subsequent archiving in an object-oriented data base. The data show the applicability of this approach to different arrangements of low-level remote sensing channels. The technique works well when the neural network is trained on data similar to the data used for testing.

  10. Venus - Complex Network of Narrow Fractures Near Hestia Rupes Region

    NASA Image and Video Library

    1996-10-23

    This image from NASA Magellan spacecraft covers region near Hestia Rupes on the northwestern corner of Aphrodite Terra. The complex network of narrow (<1 kilometer) fractures in the center of the image extends for approximately 50 kilometers (31 miles). This network exhibits tributary-like branches similar to those observed in river systems on Earth. However, the angular intersections of tributaries suggest tectonic control. These features appear to be due to drainage of lava along preexisting fractures and subsequent collapse of the surface. The underlying tectonic fabric can be observed in the northeast trending ridges which predate the plains. http://photojournal.jpl.nasa.gov/catalog/PIA00469

  11. 75 FR 77882 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... vascular networks from medical and basic research images. Deregulation of angiogenesis plays a major role...

  12. Eliminating chromatic aberration of lens and recognition of thermal images with artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Wu, Bo-Wen; Lin, Wei-Tang; Jon, Jen-Liung

    2007-11-01

    Resolution and color are the two main criteria for assessing an optical digital image, but improving the overall image quality of an optical system is difficult because of limits on size, materials, and the design environment of the optical system. It is therefore important to improve the recognition of images that are blurred by aberrations and noise, or degraded by the characteristics of human vision (distant and small targets), using artificial intelligence techniques such as genetic algorithms and neural networks, while reducing the chromatic aberration of the optical system and without adding complex calculations to the image processing. This study aims to improve, comprehensively, economically, and effectively, the recognition and classification of low-quality images produced by the optical system and its environment.

  13. Electronic Document Supply Systems.

    ERIC Educational Resources Information Center

    Cawkell, A. E.

    1991-01-01

    Describes electronic document delivery systems used by libraries and document image processing systems used for business purposes. Topics discussed include technical specifications; analogue read-only laser videodiscs; compact discs and CD-ROM; WORM; facsimile; ADONIS (Article Delivery over Network Information System); DOCDEL; and systems at the…

  14. From fuzzy recurrence plots to scalable recurrence networks of time series

    NASA Astrophysics Data System (ADS)

    Pham, Tuan D.

    2017-04-01

    Recurrence networks, which are derived from recurrence plots of nonlinear time series, enable the extraction of hidden features of complex dynamical systems. Because fuzzy recurrence plots are represented as grayscale images, this paper presents a variety of texture features that can be extracted from fuzzy recurrence plots. Based on the notion of fuzzy recurrence plots, defuzzified, undirected, and unweighted recurrence networks are introduced. Network measures can be computed for defuzzified recurrence networks that are scalable to meet the demand for the network-based analysis of big data.
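
    The sketch below is a minimal illustration of going from a scalar time series to a grayscale (fuzzy-like) recurrence matrix and then, by thresholding, to an undirected, unweighted recurrence network; the Gaussian membership function and cut level are illustrative choices, not the paper's exact fuzzy recurrence plot definition.

```python
import numpy as np

def recurrence_network(x, dim=3, delay=1, sigma=0.5, cut=0.5):
    """Build an undirected, unweighted recurrence network from a scalar time series.

    1. Time-delay embedding of x into phase-space vectors.
    2. A soft (grayscale, fuzzy-like) recurrence matrix via a Gaussian kernel of
       pairwise distances -- an illustrative membership function, not the paper's.
    3. Defuzzification: threshold the soft matrix into a binary adjacency matrix,
       removing self-loops.
    """
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    soft = np.exp(-(d ** 2) / (2 * sigma ** 2))     # grayscale "image" in [0, 1]
    adj = (soft >= cut).astype(int)
    np.fill_diagonal(adj, 0)
    return soft, adj

# Example: a noisy sine series; degree is one simple network measure.
t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
soft, adj = recurrence_network(x)
print(adj.shape, adj.sum(axis=1).mean())            # network size and mean degree
```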

  15. Two Unipolar Terminal-Attractor-Based Associative Memories

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Wu, Chwan-Hwa

    1995-01-01

    Two unipolar mathematical models of electronic neural network functioning as terminal-attractor-based associative memory (TABAM) developed. Models comprise sets of equations describing interactions between time-varying inputs and outputs of neural-network memory, regarded as dynamical system. Simplifies design and operation of optoelectronic processor to implement TABAM performing associative recall of images. TABAM concept described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). Experimental optoelectronic apparatus that performed associative recall of binary images described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).

  16. Magnetic resonance imaging research in sub-Saharan Africa: challenges and satellite-based networking implementation.

    PubMed

    Latourette, Matthew T; Siebert, James E; Barto, Robert J; Marable, Kenneth L; Muyepa, Anthony; Hammond, Colleen A; Potchen, Michael J; Kampondeni, Samuel D; Taylor, Terrie E

    2011-08-01

    As part of an NIH-funded study of malaria pathogenesis, a magnetic resonance (MR) imaging research facility was established in Blantyre, Malaŵi to enhance the clinical characterization of pediatric patients with cerebral malaria through application of neurological MR methods. The research program requires daily transmission of MR studies to Michigan State University (MSU) for clinical research interpretation and quantitative post-processing. An intercontinental satellite-based network was implemented for transmission of MR image data in Digital Imaging and Communications in Medicine (DICOM) format, research data collection, project communications, and remote systems administration. Satellite Internet service costs limited the bandwidth to symmetrical 384 kbit/s. DICOM routers deployed at both the Malaŵi MRI facility and MSU manage the end-to-end encrypted compressed data transmission. Network performance between DICOM routers was measured while transmitting both mixed clinical MR studies and synthetic studies. Effective network latency averaged 715 ms. Within a mix of clinical MR studies, the average transmission time for a 256 × 256 image was ~2.25 and ~6.25 s for a 512 × 512 image. Using synthetic studies of 1,000 duplicate images, the interquartile range for 256 × 256 images was [2.30, 2.36] s and [5.94, 6.05] s for 512 × 512 images. Transmission of clinical MRI studies between the DICOM routers averaged 9.35 images per minute, representing an effective channel utilization of ~137% of the 384-kbit/s satellite service as computed using uncompressed image file sizes (including the effects of image compression, protocol overhead, channel latency, etc.). Power unreliability was the primary cause of interrupted operations in the first year, including an outage exceeding 10 days.

  17. Similarity estimation for reference image retrieval in mammograms using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Higuchi, Shunichi; Morita, Takako; Oiwa, Mikinao; Fujita, Hiroshi

    2018-02-01

    Periodic breast cancer screening with mammography is considered effective in decreasing breast cancer mortality. For screening programs to be successful, an intelligent image analytic system may support radiologists' efficient image interpretation. In our previous studies, we have investigated image retrieval schemes for diagnostic references of breast lesions on mammograms and ultrasound images. Using a machine learning method, reliable similarity measures that agree with radiologists' similarity were determined and relevant images could be retrieved. However, our previous method includes a feature extraction step, in which hand crafted features were determined based on manual outlines of the masses. Obtaining the manual outlines of masses is not practical in clinical practice and such data would be operator-dependent. In this study, we investigated a similarity estimation scheme using a convolutional neural network (CNN) to skip such procedure and to determine data-driven similarity scores. By using CNN as feature extractor, in which extracted features were employed in determination of similarity measures with a conventional 3-layered neural network, the determined similarity measures were correlated well with the subjective ratings and the precision of retrieving diagnostically relevant images was comparable with that of the conventional method using handcrafted features. By using CNN for determination of similarity measure directly, the result was also comparable. By optimizing the network parameters, results may be further improved. The proposed method has a potential usefulness in determination of similarity measure without precise lesion outlines for retrieval of similar mass images on mammograms.
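
    As a hedged sketch of the retrieval step only (not the authors' trained similarity network), the code below ranks a reference library by cosine similarity between CNN feature vectors; the feature dimension and the random stand-in features are assumptions.

```python
import numpy as np

def cosine_similarity(a, B):
    """Cosine similarity between one query vector a and a matrix of library vectors B."""
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B @ a

def retrieve_similar(query_feat, library_feats, top_k=5):
    """Return indices and scores of the top-k most similar reference images."""
    scores = cosine_similarity(query_feat, library_feats)
    order = np.argsort(scores)[::-1][:top_k]
    return order, scores[order]

# Example with random stand-in CNN features (e.g. a 512-D penultimate-layer output).
rng = np.random.default_rng(0)
library = rng.standard_normal((1000, 512))     # hypothetical reference mass images
query = library[42] + 0.05 * rng.standard_normal(512)
idx, scores = retrieve_similar(query, library)
print(idx)                                     # index 42 should rank first
```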

  18. Prototype development and implementation of picture archiving and communications systems based on ISO-OSI standard

    NASA Astrophysics Data System (ADS)

    Martinez, Ralph; Nam, Jiseung

    1992-07-01

    Picture Archiving and Communication Systems (PACS) is an integration of digital image formation in a hospital, which encompasses various imaging equipment, image viewing workstations, image databases, and a high speed network. The integration requires a standardization of communication protocols to connect devices from different vendors. The American College of Radiology and the National Electrical Manufacturers Association (ACR-NEMA) standard Version 2.0 provides a point-to-point hardware interface, a set of software commands, and a consistent set of data formats for PACS. But, it is inadequate for PACS networking environments, because of its point-to-point nature and its inflexibility to allow other services and protocols in the future. Based on previous experience of PACS developments in The University of Arizona, a new communication protocol for PACS networks and an approach were proposed to ACR-NEMA Working Group VI. The defined PACS protocol is intended to facilitate the development of PACSs capable of interfacing with other hospital information systems. Also, it is intended to allow the creation of diagnostic information data bases which can be interrogated by a variety of distributed devices. A particularly important goal is to support communications in a multivendor environment. The new protocol specifications are defined primarily as a combination of the International Organization for Standardization/Open Systems Interconnection (ISO/OSI), TCP/IP protocols, and the data format portion of the ACR-NEMA standard. This paper addresses the specification and implementation of the ISO-based protocol into a PACS prototype. The protocol specification, which covers Presentation, Session, Transport, and Network layers, is summarized briefly. The protocol implementation is discussed based on our implementation efforts in the UNIX Operating System Environment. At the same time, results of a performance comparison between the ISO and TCP/IP implementations are presented to demonstrate the implementation of the defined protocol. The performance testing is done by prototyping the PACS on the available platforms: MicroVAX II, DECstation, and Sun workstations.

  19. An ultra-wideband microwave tomography system: preliminary results.

    PubMed

    Gilmore, Colin; Mojabi, Puyan; Zakaria, Amer; Ostadrahimi, Majid; Kaye, Cam; Noghanian, Sima; Shafai, Lotfollah; Pistorius, Stephen; LoVetri, Joe

    2009-01-01

    We describe a 2D wide-band multi-frequency microwave imaging system intended for biomedical imaging. The system is capable of collecting data from 2-10 GHz, with 24 antenna elements connected to a vector network analyzer via a 2 x 24 port matrix switch. Through the use of two different nonlinear reconstruction schemes: the Multiplicative-Regularized Contrast Source Inversion method and an enhanced version of the Distorted Born Iterative Method, we show preliminary imaging results from dielectric phantoms where data were collected from 3-6 GHz. The early inversion results show that the system is capable of quantitatively reconstructing dielectric objects.

  20. Methodology for designing and implementing a class of service for the transmission of medical images over a common network

    NASA Astrophysics Data System (ADS)

    Dimond, David A.; Burgess, Robert; Barrios, Nolan; Johnson, Neil D.

    2000-05-01

    Traditionally, to guarantee the network performance of medical image data transmission, imaging traffic was isolated on a separate network. Organizations are depending on a new generation of multi-purpose networks to transport both normal information and image traffic as they expand access to images throughout the enterprise. These organizations want to leverage their existing infrastructure for imaging traffic, but are not willing to accept degradations in overall network performance. To guarantee 'on demand' network performance for image transmissions anywhere at any time, networks need to be designed with the ability to 'carve out' bandwidth for specific applications and to minimize the chances of network failures. This paper will present the methodology Cincinnati Children's Hospital Medical Center (CHMC) used to enhance the physical and logical network design of the existing hospital network to guarantee a class of service for imaging traffic. PACS network designs should utilize the existing enterprise local area network (LAN) infrastructure where appropriate. Logical separation or segmentation provides the application independence from other clinical and administrative applications as required, ensuring bandwidth and service availability.

  1. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, the VGG16 network is used as a feature extractor to obtain 4,096-dimensional features from the images; the extracted features and the image labels are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Of the 400 high-dimensional feature samples, 300 are used for training with the VGG16 features and the BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method can take full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which rely on manually designed and extracted features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
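
    A hedged sketch of the described pipeline, assuming TensorFlow/Keras is available: 4,096-dimensional features are taken from VGG16's fc2 layer and fed to a small fully connected (BP-style) classifier with three blur-level outputs. The hidden-layer size, optimizer, and random stand-in data are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Feature extractor: ImageNet-pretrained VGG16 truncated at the 4,096-D 'fc2' layer
# (downloads the pretrained weights on first use).
base = VGG16(weights="imagenet", include_top=True)
extractor = tf.keras.Model(inputs=base.input,
                           outputs=base.get_layer("fc2").output)

def extract_features(images):
    """images: (N, 224, 224, 3) float array with values in [0, 255]."""
    return extractor.predict(preprocess_input(images), verbose=0)

# BP-style classifier over the extracted features; 3 outputs = 3 blur levels.
classifier = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(4096,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# Example with random stand-in data; real use would pass blurred CSIQ images.
x = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 3, size=8)
feats = extract_features(x)
classifier.fit(feats, y, epochs=1, verbose=0)
print(classifier.predict(feats[:2], verbose=0).shape)   # (2, 3) class probabilities
```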

  2. High resolution light-sheet based high-throughput imaging cytometry system enables visualization of intra-cellular organelles

    NASA Astrophysics Data System (ADS)

    Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim

    2014-09-01

    Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from micro- to nanoliters per minute. Moreover, this opens up in-vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of the fluorescently labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.

  3. Medical applications for high-performance computers in SKIF-GRID network.

    PubMed

    Zhuchkov, Alexey; Tverdokhlebov, Nikolay

    2009-01-01

    The paper presents a set of software services for massive mammography image processing by using high-performance parallel computers of SKIF-family which are linked into a service-oriented grid-network. An experience of a prototype system implementation in two medical institutions is also described.

  4. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
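
    A hedged sketch of the automatic-labeling idea: the joint entropy of an uncompressed image and its compressed counterpart, computed from their 2-D histogram. The bin count and the synthetic example are assumptions; the paper's mapping from entropy to an acceptability label is not reproduced here.

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=256):
    """Joint entropy (in bits) of two equally sized grayscale images.

    Higher values suggest that compression has scattered the pixel
    correspondence (more artifacts); lower values suggest the compressed
    image closely follows the original.
    """
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Example: compare an image against a quantised (compression-like) copy of itself.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256)).astype(float)
degraded = np.clip((original // 16) * 16 + rng.normal(0, 2, original.shape), 0, 255)
print(joint_entropy(original, original), joint_entropy(original, degraded))
```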

  5. Transversal homoclinic orbits in a transiently chaotic neural network.

    PubMed

    Chen, Shyan-Shiou; Shih, Chih-Wen

    2002-09-01

    We study the existence of snap-back repellers, hence the existence of transversal homoclinic orbits in a discrete-time neural network. Chaotic behaviors for the network system in the sense of Li and Yorke or Marotto can then be concluded. The result is established by analyzing the structures of the system and allocating suitable parameters in constructing the fixed points and their pre-images for the system. The investigation provides a theoretical confirmation on the scenario of transient chaos for the system. All the parameter conditions for the theory can be examined numerically. The numerical ranges for the parameters which yield chaotic dynamics and convergent dynamics provide significant information in the annealing process in solving combinatorial optimization problems using this transiently chaotic neural network. (c) 2002 American Institute of Physics.

  6. Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms.

    PubMed

    Coppini, Giuseppe; Diciotti, Stefano; Falchini, Massimo; Villari, Natale; Valli, Guido

    2003-12-01

    The paper describes a neural-network-based system for the computer aided detection of lung nodules in chest radiograms. Our approach is based on multiscale processing and artificial neural networks (ANNs). The problem of nodule detection is faced by using a two-stage architecture including: 1) an attention focusing subsystem that processes whole radiographs to locate possible nodular regions ensuring high sensitivity; 2) a validation subsystem that processes regions of interest to evaluate the likelihood of the presence of a nodule, so as to reduce false alarms and increase detection specificity. Biologically inspired filters (both LoG and Gabor kernels) are used to enhance salient image features. ANNs of the feedforward type are employed, which allow an efficient use of a priori knowledge about the shape of nodules, and the background structure. The images from the public JSRT database, including 247 radiograms, were used to build and test the system. We performed a further test by using a second private database with 65 radiograms collected and annotated at the Radiology Department of the University of Florence. Both data sets include nodule and nonnodule radiographs. The use of a public data set along with independent testing with a different image set makes the comparison with other systems easier and allows a deeper understanding of system behavior. Experimental results are described by ROC/FROC analysis. For the JSRT database, we observed that by varying sensitivity from 60 to 75% the number of false alarms per image lies in the range 4-10, while accuracy is in the range 95.7-98.0%. When the second data set was used comparable results were obtained. The observed system performances support the undertaking of system validation in clinical settings.
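
    As a hedged sketch of the biologically inspired filtering stage only (not the two-stage detector), the code below enhances blob-like candidate regions with scale-normalised Laplacian-of-Gaussian filters from SciPy; the scales and the max-over-scales combination are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blob_enhance(image, sigmas=(2.0, 4.0, 8.0)):
    """Multiscale Laplacian-of-Gaussian enhancement of bright, roughly circular
    structures (nodule candidates). Responses are scale-normalised with sigma**2
    and combined by taking the maximum over scales."""
    image = image.astype(float)
    responses = [-(s ** 2) * gaussian_laplace(image, sigma=s) for s in sigmas]
    return np.max(responses, axis=0)

# Example: a synthetic "radiograph" with one bright blob on a smooth background.
yy, xx = np.mgrid[:128, :128]
img = 0.3 * xx / 128 + np.exp(-((yy - 64) ** 2 + (xx - 40) ** 2) / (2 * 5.0 ** 2))
enhanced = log_blob_enhance(img)
print(np.unravel_index(np.argmax(enhanced), enhanced.shape))   # near (64, 40)
```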

  7. Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks.

    PubMed

    Lin, Jianyu; Clancy, Neil T; Qi, Ji; Hu, Yang; Tatla, Taran; Stoyanov, Danail; Maier-Hein, Lena; Elson, Daniel S

    2018-06-15

    Surgical guidance and decision making could be improved with accurate and real-time measurement of intra-operative data including shape and spectral information of the tissue surface. In this work, a dual-modality endoscopic system has been proposed to enable tissue surface shape reconstruction and hyperspectral imaging (HSI). This system centers around a probe composed of an incoherent fiber bundle, whose fiber arrangement is different at the two ends, and miniature imaging optics. For 3D reconstruction with structured light (SL), a light pattern formed of randomly distributed spots with different colors is projected onto the tissue surface, creating artificial texture. Pattern decoding with a Convolutional Neural Network (CNN) model and a customized feature descriptor enables real-time 3D surface reconstruction at approximately 12 frames per second (FPS). In HSI mode, spatially sparse hyperspectral signals from the tissue surface can be captured with a slit hyperspectral imager in a single snapshot. A CNN based super-resolution model, namely "super-spectral-resolution" network (SSRNet), has also been developed to estimate pixel-level dense hypercubes from the endoscope camera's standard RGB images and the sparse hyperspectral signals, at approximately 2 FPS. The probe, with a 2.1 mm diameter, enables the system to be used with endoscope working channels. Furthermore, since data acquisition in both modes can be accomplished in one snapshot, operation of this system in clinical applications is minimally affected by tissue surface movement and deformation. The whole apparatus has been validated on phantoms and tissue (ex vivo and in vivo), while initial measurements on patients during laryngeal surgery show its potential in real-world clinical applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Fault tolerant high-performance PACS network design and implementation

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Boehme, Johannes M.

    1998-07-01

    The Wake Forest University School of Medicine and the Wake Forest University/Baptist Medical Center (WFUBMC) are implementing a second generation PACS. The first generation PACS provided helpful information about the functional and temporal requirements of the system. It highlighted the importance of image retrieval speed, system availability, RIS/HIS integration, the ability to rapidly view images on any PACS workstation, network bandwidth, equipment redundancy, and the ability for the system to evolve using standards-based components. This paper deals with the network design and implementation of the PACS. The physical layout of the hospital areas served by the PACS, the choice of network equipment and installation issues encountered are addressed. Efforts to optimize fault tolerance are discussed. The PACS network is a gigabit, mixed-media network based on LAN emulation over ATM (LANE) with a rapid migration from LANE to Multiple Protocols Over ATM (MPOA) planned. Two fault-tolerant backbone ATM switches serve to distribute network accesses with two load-balancing 622 megabit per second (Mbps) OC-12 interconnections. The switch was sized to be upgradable to provide a 2.54 Gbps OC-48 interconnection with an OC-12 interconnection as a load-balancing backup. Modalities connect with legacy network interface cards to a switched-ethernet device. This device has two 155 Mbps OC-3 load-balancing uplinks to each of the backbone ATM switches of the PACS. This provides a fault-tolerant logical connection to the modality servers which pass verified DICOM images to the PACS servers and proper PACS diagnostic workstations. Where fiber pulls were prohibitively expensive, edge ATM switches were installed with an OC-12 uplink to a backbone ATM switch. The PACS and data base servers are fault-tolerant, hot-swappable Sun Enterprise Servers with an OC-12 connection to a backbone ATM switch and a fast-ethernet connection to a back-up network. The workstations come with 10/100BASE-T autosense cards. A redundant switched-ethernet network will be installed to provide yet another degree of network fault-tolerance. The switched-ethernet devices are connected to each of the backbone ATM switches with two load-balancing OC-3 connections to provide fault-tolerant connectivity in the event of a primary network failure.

  9. Real-time image processing for particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Kreizer, Mark; Ratner, David; Liberzon, Alex

    2010-01-01

    We present a novel high-speed particle tracking velocimetry (PTV) experimental system. Its novelty is due to the FPGA-based, real-time image processing "on camera". Instead of an image, the camera transfers to the computer, via a network card, only the relevant information about the identified flow tracers. Therefore, the system is ideal for remote particle tracking systems in research and industrial applications, while the camera can be controlled and data can be transferred over any high-bandwidth network. We present the hardware and the open source software aspects of the PTV experiments. The tracking results of the new experimental system have been compared to flow visualization and particle image velocimetry measurements. The canonical flow in the central cross section of a cubic cavity (1:1:1 aspect ratio) in our lid-driven cavity apparatus is used for validation purposes. The downstream secondary eddy (DSE) is the sensitive portion of this flow and its size was measured with increasing Reynolds number (via increasing belt velocity). The size of the DSE estimated from the flow visualization, PIV and compressed PTV is shown to agree within the experimental uncertainty of the methods applied.
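
    A hedged software analogue of the on-camera processing step (the actual system implements this in FPGA logic): threshold the frame, label connected components, and report only tracer centroids and sizes. The threshold and minimum blob area are assumed values.

```python
import numpy as np
from scipy import ndimage

def detect_tracers(frame, threshold=0.5, min_area=3):
    """Return (x, y, area) for each bright tracer particle in a grayscale frame.

    Only this short list -- not the full image -- would need to be sent over the
    network, which is the bandwidth saving exploited by the on-camera processing.
    """
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    results = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if xs.size >= min_area:
            results.append((xs.mean(), ys.mean(), xs.size))
    return results

# Example: two synthetic particles in a 64x64 frame.
frame = np.zeros((64, 64))
frame[10:13, 20:23] = 1.0
frame[40:44, 50:54] = 1.0
print(detect_tracers(frame))   # ~[(21.0, 11.0, 9), (51.5, 41.5, 16)]
```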

  10. Radiation dose reduction in digital breast tomosynthesis (DBT) by means of deep-learning-based supervised image processing

    NASA Astrophysics Data System (ADS)

    Liu, Junchi; Zarshenas, Amin; Qadir, Ammar; Wei, Zheng; Yang, Limin; Fajardo, Laurie; Suzuki, Kenji

    2018-03-01

    To reduce cumulative radiation exposure and lifetime risks for radiation-induced cancer from breast cancer screening, we developed a deep-learning-based supervised image-processing technique called neural network convolution (NNC) for radiation dose reduction in DBT. NNC employed patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw projection images and corresponding "teaching" higher-dose (HD) images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, CA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term "virtual" HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine a dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The Structural SIMilarity (SSIM) index was used to evaluate the image quality. For testing, we collected half-dose (50% of the standard dose: 32+/-14 mAs at 33+/-5 kVp) and full-dose (standard dose: 68+/-23 mAs at 33+/-5 kVp) images of 10 clinical cases with the DBT system at University of Iowa Hospitals and Clinics. NNC converted half-dose DBT images of 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images. Our cadaver phantom experiment demonstrated 79% dose reduction.
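
    As a hedged sketch (not the authors' NNC implementation), the PyTorch model below regresses higher-dose patches from lower-dose patches with a small convolutional network; the layer widths, patch size, loss, and the random stand-in patches are illustrative.

```python
import torch
import torch.nn as nn

class PatchDoseRegressor(nn.Module):
    """Small convolutional regressor: lower-dose patch in, 'virtual higher-dose' patch out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = PatchDoseRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step with random stand-in patches
# (real use: quarter-dose inputs and registered higher-dose teaching patches).
low_dose = torch.rand(16, 1, 32, 32)
high_dose = torch.rand(16, 1, 32, 32)
pred = model(low_dose)
loss = loss_fn(pred, high_dose)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```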

  11. Squeezeposenet: Image Based Pose Regression with Small Convolutional Neural Networks for Real Time Uas Navigation

    NASA Astrophysics Data System (ADS)

    Müller, M. S.; Urban, S.; Jutzi, B.

    2017-08-01

    The number of unmanned aerial vehicles (UAVs) is increasing since low-cost airborne systems are available to a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interference. Besides this, local image-based solutions like Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can e.g. be used to support the GNSS solution by closing trajectory gaps, but they are computationally expensive. However, if the trajectory estimation is interrupted or unavailable, a re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.
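
    A hedged sketch of adapting SqueezeNet to pose regression with torchvision (not the authors' exact SqueezePoseNet): the 1000-way classifier convolution is replaced by a 7-output head for translation plus quaternion orientation. The output parametrisation and the assumption of a recent torchvision release are mine, not the paper's.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_pose_net(pretrained=True):
    """SqueezeNet 1.1 backbone with its classifier replaced by a 7-D pose head
    (x, y, z translation + 4-D quaternion orientation)."""
    net = models.squeezenet1_1(weights="DEFAULT" if pretrained else None)
    net.classifier[1] = nn.Conv2d(512, 7, kernel_size=1)   # transfer learning: new head
    net.num_classes = 7
    return net

model = build_pose_net(pretrained=False)        # False here to avoid a weight download
images = torch.rand(4, 3, 224, 224)             # stand-in UAV camera frames
pose = model(images)                            # (4, 7): translation + quaternion
# Normalise the quaternion part so it represents a valid rotation.
t, q = pose[:, :3], nn.functional.normalize(pose[:, 3:], dim=1)
print(t.shape, q.shape)
```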

  12. Towards an integrated defense system for cyber security situation awareness experiment

    NASA Astrophysics Data System (ADS)

    Zhang, Hanlin; Wei, Sixiao; Ge, Linqiang; Shen, Dan; Yu, Wei; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe

    2015-05-01

    In this paper, an implemented defense system is demonstrated to carry out cyber security situation awareness. The developed system consists of distributed passive and active network sensors designed to effectively capture suspicious information associated with cyber threats, effective detection schemes to accurately distinguish attacks, and network actors to rapidly mitigate attacks. Based on the collected data from network sensors, image-based and signals-based detection schemes are implemented to detect attacks. To further mitigate attacks, deployed dynamic firewalls on hosts dynamically update detection information reported from the detection schemes and block attacks. The experimental results show the effectiveness of the proposed system. A future plan to design an effective defense system is also discussed based on system theory.

  13. [Research and implementation of the TLS network transport security technology based on DICOM standard].

    PubMed

    Lu, Xiaoqi; Wang, Lei; Zhao, Jianfeng

    2012-02-01

    With the development of medical informatics, Picture Archiving and Communication Systems (PACS), Hospital Information System/Radiology Information System (HIS/RIS), and other medical information management systems have become popular, and interoperation between these systems has become more frequent. These formerly closed systems will therefore inevitably be opened up and regionalized by means of networks. As this trend takes hold, the security of information transmission may be the first problem to be solved. Based on this need for network security, we investigated the Digital Imaging and Communications in Medicine (DICOM) standard and the Transport Layer Security (TLS) protocol, and implemented TLS transmission of DICOM medical information with the OpenSSL and DCMTK toolkits.
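
    The paper implements DICOM-over-TLS with the DCMTK and OpenSSL toolkits in C++; as a hedged Python analogue of the transport-security idea only (not a DICOM implementation), the sketch below wraps a TCP connection on the registered DICOM-TLS port with the standard ssl module. The host name and certificate paths are placeholders.

```python
import socket
import ssl

# Placeholder endpoint and credentials -- replace with the real PACS node and
# certificates issued by the hospital's certificate authority.
HOST, PORT = "pacs.example.org", 2762          # 2762 is the registered DICOM-TLS port
CA_FILE = "ca.pem"
CLIENT_CERT, CLIENT_KEY = "client-cert.pem", "client-key.pem"

# TLS context: verify the server and present a client certificate (mutual TLS),
# which is how DICOM nodes typically authenticate each other.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_FILE)
context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)

with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated", tls_sock.version(), tls_sock.cipher())
        # A real client would now send the DICOM A-ASSOCIATE-RQ PDU over tls_sock
        # and continue the normal DICOM upper-layer protocol, unchanged except
        # that every byte is protected by TLS.
```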

  14. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmark of face region and the texture of eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmark and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses landmark and texture modalities together and classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, and that the average accuracy achieves 96.2%.

  15. A revised limbic system model for memory, emotion and behaviour.

    PubMed

    Catani, Marco; Dell'acqua, Flavio; Thiebaut de Schotten, Michel

    2013-09-01

    Emotion, memories and behaviour emerge from the coordinated activities of regions connected by the limbic system. Here, we propose an update of the limbic model based on the seminal work of Papez, Yakovlev and MacLean. In the revised model we identify three distinct but partially overlapping networks: (i) the Hippocampal-diencephalic and parahippocampal-retrosplenial network dedicated to memory and spatial orientation; (ii) The temporo-amygdala-orbitofrontal network for the integration of visceral sensation and emotion with semantic memory and behaviour; (iii) the default-mode network involved in autobiographical memories and introspective self-directed thinking. The three networks share cortical nodes that are emerging as principal hubs in connectomic analysis. This revised network model of the limbic system reconciles recent functional imaging findings with anatomical accounts of clinical disorders commonly associated with limbic pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Enterprise-class Digital Imaging and Communications in Medicine (DICOM) image infrastructure.

    PubMed

    York, G; Wortmann, J; Atanasiu, R

    2001-06-01

    Most current picture archiving and communication systems (PACS) are designed for a single department or a single modality. Few PACS installations have been deployed that support the needs of the hospital or the entire Integrated Delivery Network (IDN). The authors propose a new image management architecture that can support a large, distributed enterprise.

  17. Intelligent MRTD testing for thermal imaging system using ANN

    NASA Astrophysics Data System (ADS)

    Sun, Junyue; Ma, Dongmei

    2006-01-01

    The Minimum Resolvable Temperature Difference (MRTD) is the most widely accepted figure for describing the performance of a thermal imaging system, and many models have been proposed to predict it. MRTD testing is a psychophysical task for which biases are unavoidable. It requires laboratory conditions, such as normal air conditions and a constant temperature, as well as expensive measuring equipment and a considerable amount of time; especially when measuring imagers of the same type, the test is time consuming. An automated and intelligent measurement method should therefore be discussed. This paper adopts the concept of automated MRTD testing using a boundary contour system and fuzzy ARTMAP, but uses different methods: it describes an automated MRTD testing procedure based on a back-propagation network. First, we use a frame grabber to capture the 4-bar target image data. Then, according to the image gray scale, we segment the image to locate the 4-bar target and extract a feature vector representing the image characteristics and human detection ability. These feature sets, along with the known target visibility, are used to train the artificial neural network (ANN). In effect, this is a nonlinear classification (over the input dimensions) of the image series using the ANN; the task is to judge whether the image is resolvable or uncertain. The trained ANN then emulates observer performance in determining the MRTD. This method can reduce inter-observer uncertainties and long-term time-dependent factors through standardization. This paper introduces the feature extraction algorithm, demonstrates the feasibility of the whole process, and gives the accuracy of the MRTD measurement.
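
    As a hedged illustration of the feature-extraction step (not the paper's exact feature set), the sketch below derives a simple modulation-depth feature from the column-wise intensity profile of a segmented 4-bar target region; the synthetic bar pattern and blur are illustrative.

```python
import numpy as np

def four_bar_features(roi):
    """Simple features from a segmented 4-bar target region.

    The column-wise mean gives the intensity profile across the bars; the
    modulation depth (max - min) / (max + min) and the profile itself are
    crude proxies for how resolvable the pattern is to an observer.
    """
    profile = roi.mean(axis=0)                       # average along the bar direction
    m_max, m_min = profile.max(), profile.min()
    modulation = (m_max - m_min) / (m_max + m_min + 1e-12)
    return np.concatenate(([modulation], profile))

# Example: a clean 4-bar pattern vs. a heavily blurred one.
bars = np.tile(np.repeat([0.2, 0.8], 4)[None, :], (16, 4))      # 4 bright bars
blurred = np.convolve(bars[0], np.ones(9) / 9, mode="same")[None, :].repeat(16, 0)
print(four_bar_features(bars)[0], four_bar_features(blurred)[0])
```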

  18. Altered structural connectivity of pain-related brain network in burning mouth syndrome-investigation by graph analysis of probabilistic tractography.

    PubMed

    Wada, Akihiko; Shizukuishi, Takashi; Kikuta, Junko; Yamada, Haruyasu; Watanabe, Yusuke; Imamura, Yoshiki; Shinozaki, Takahiro; Dezawa, Ko; Haradome, Hiroki; Abe, Osamu

    2017-05-01

    Burning mouth syndrome (BMS) is a chronic intraoral pain syndrome featuring idiopathic oral pain and burning discomfort despite clinically normal oral mucosa. The etiology of chronic pain syndrome is unclear, but preliminary neuroimaging research has suggested alterations of volume, metabolism, blood flow, and diffusion in multiple brain regions. According to the neuromatrix theory of Melzack, the sense of pain is generated in the brain by a network of multiple pain-related brain regions. Therefore, alteration of this pain-related network is also assumed to be an etiology of chronic pain. In this study, we investigated the brain network of BMS patients using probabilistic tractography and graph analysis. Fourteen BMS patients and 14 age-matched healthy controls underwent 1.5T MRI. Structural connectivity was calculated in 83 anatomically defined regions with probabilistic tractography of 60-axis diffusion tensor imaging and 3D T1-weighted imaging. Graph theory network analysis was used to evaluate the brain network at the levels of local and global connectivity. In the BMS brain, significant differences in local connectivity were found at the bilateral rostral anterior cingulate cortex, right medial orbitofrontal cortex, and left pars orbitalis, which belong to the medial pain system; however, no significant difference was found in the lateral system, including the somatic sensory cortex. A strengthened connection of the anterior cingulate cortex and medial prefrontal cortex with the basal ganglia, thalamus, and brain stem was revealed. Structural brain network analysis revealed the alteration of the medial system of the pain-related brain network in chronic pain syndrome.
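
    As a concrete illustration of the graph-based comparison described in this abstract, the following minimal Python sketch computes a simple local metric (node strength) from an 83 x 83 structural connectivity matrix per subject and compares the two groups region by region. The random matrices, the choice of node strength, and the uncorrected t-test are illustrative assumptions, not the authors' analysis pipeline.

        # Minimal sketch: per-region node strength from connectivity matrices,
        # compared between BMS patients and controls with a two-sample t-test.
        import numpy as np
        from scipy import stats

        n_regions, n_bms, n_ctl = 83, 14, 14
        rng = np.random.default_rng(1)

        def node_strength(conn):
            """Sum of connection weights attached to each region (diagonal ignored)."""
            conn = conn.copy()
            np.fill_diagonal(conn, 0.0)
            return conn.sum(axis=1)

        bms = np.array([node_strength(rng.random((n_regions, n_regions))) for _ in range(n_bms)])
        ctl = np.array([node_strength(rng.random((n_regions, n_regions))) for _ in range(n_ctl)])

        # Per-region test; in practice this would be corrected for multiple comparisons.
        t, p = stats.ttest_ind(bms, ctl, axis=0)
        print('regions with uncorrected p < 0.05:', np.flatnonzero(p < 0.05))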

  19. Thermal/vacuum measurements of the Herschel space telescope by close-range photogrammetry

    NASA Astrophysics Data System (ADS)

    Parian, J. Amiri; Cozzani, A.; Appolloni, M.; Casarosa, G.

    2017-11-01

    In the frame of the development of a videogrammetric system to be used in thermal vacuum chambers at the European Space Research and Technology Centre (ESTEC) and other sites across Europe, the design of a network using micro-cameras was specified by the European Space Agency (ESA)-ESTEC. The selected test set-up is the photogrammetric test of the Herschel Satellite Flight Model in the ESTEC Large Space Simulator. The photogrammetric system will be used to verify the Herschel Telescope alignment and Telescope positioning with respect to the Cryostat Vacuum Vessel (CVV) inside the Large Space Simulator during Thermal-Vacuum/Thermal-Balance test phases. We designed a close-range photogrammetric network by heuristic simulation and a videogrammetric system with an overall accuracy of 1:100,000. A semi-automated image acquisition system, which is able to work at low temperatures (-170°C) in order to acquire images according to the designed network, has been constructed by ESA-ESTEC. In this paper we present the videogrammetric system and sub-systems and the results of real measurements with a representative set-up, similar to that of the Herschel spacecraft, realized in the ESTEC Test Centre.

  20. A low-cost photoacoustic microscopy system with a laser diode excitation

    PubMed Central

    Wang, Tianheng; Nandy, Sreyankar; Salehi, Hassan S.; Kumavor, Patrick D.; Zhu, Quing

    2014-01-01

    Photoacoustic microscopy (PAM) is capable of mapping microvasculature networks in biological tissue and has demonstrated great potential for biomedical applications. However, the clinical application of the PAM system is limited due to the use of bulky and expensive pulsed laser sources. In this paper, a low-cost optical-resolution PAM system with pulsed laser diode excitation is introduced. The lateral resolution of this PAM system was estimated to be 7 µm by imaging a carbon fiber. Phantoms made of polyethylene tubes filled with blood, as well as a mouse ear, were imaged to demonstrate the feasibility of this PAM system for imaging biological tissues. PMID:25401019

  1. A feed-forward Hopfield neural network algorithm (FHNNA) with a colour satellite image for water quality mapping

    NASA Astrophysics Data System (ADS)

    Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar

    2016-06-01

    Many techniques have been proposed for the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models alongside them. The Hopfield neural network (HNN) is a common, fast, simple, and efficient type of artificial neural network, but it has difficulty with images that contain more than two colours, such as remote sensing images. This work attempts to solve this problem by modifying the network to handle colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was developed and used with a colour satellite image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based essentially on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and adopting a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data that were divided into two groups, one used by the algorithm and the other used for validating the results. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm is indicated by a high correlation coefficient (R=0.979) and a low root mean square error (RMSE=4.301) against the validation data, compared with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with remote sensing (THEOS). This is a new and useful application of the HNN, and a new model for water quality mapping, an important environmental problem, using remote sensing techniques.
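
    As a concrete illustration of the validation metrics quoted in this abstract, the following minimal Python sketch computes the correlation coefficient R and the root mean square error (RMSE) between estimated and validation TSS values. The numbers are illustrative placeholders, not the study's measurements.

        # Minimal sketch: R and RMSE between mapped and validation TSS values.
        import numpy as np

        tss_validation = np.array([12.0, 35.5, 50.2, 18.7, 44.1, 27.3])   # validation data
        tss_estimated  = np.array([14.1, 33.0, 53.8, 16.9, 47.5, 25.0])   # values from the map

        r = np.corrcoef(tss_validation, tss_estimated)[0, 1]
        rmse = np.sqrt(np.mean((tss_validation - tss_estimated) ** 2))
        print(f'R = {r:.3f}, RMSE = {rmse:.3f}')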

  2. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  3. Study on an agricultural environment monitoring server system using Wireless Sensor Networks.

    PubMed

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

    This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoor agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed agricultural environment monitoring server system collects environmental and soil information outdoors through WSN-based environmental and soil sensors, collects image information through CCTVs, and collects location information using GPS modules. The collected information is stored in a database by the agricultural environment monitoring server, which consists of a sensor manager that handles information collected from the WSN sensors, an image information manager that handles image information collected from the CCTVs, and a GPS manager that processes location information of the system; the server then provides this information to producers. In addition, a solar cell-based power supply is implemented for the server system so that it can be used in agricultural environments with insufficient power infrastructure. The agricultural environment monitoring server system can monitor outdoor environmental information remotely, and its use can be expected to contribute to increasing crop yields and improving quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.

  4. Teleradiology network to improve patient care in a peacekeeping military operation

    NASA Astrophysics Data System (ADS)

    Cleary, Kevin R.; Levine, Betty A.; Norton, Gary S.; Mun, Seong K.; Cramer, Timothy J.; de Treville, Robert E.

    1997-05-01

    The Imaging Science and Information Systems (ISIS) Center of the Department of Radiology at Georgetown University Medical Center recently collaborated with the US Army in developing an off-the-shelf teleradiology network for Operation Joint Endeavor, the peace-keeping mission in Bosnia-Herzegovina. The network is part of Operation Primetime III, a project to deploy advanced communications and medical equipment to provide state-of-the-art medical care to the 20,000 US troops stationed there. The network encompasses three major sites: the 212th Mobile Army Surgical Hospital (MASH) near Tuzla, Bosnia-Herzegovina; the 67th Combat Support Hospital (CSH) in Taszar, Hungary; and the Landstuhl Regional Medical Center (LRMC) in Landstuhl, Germany. Planning for the project began in January 1996, and all three sites were operational by April 1996. Since the system was deployed, computed radiography (CR) has been used almost exclusively at the MASH and CSH for all general x-ray exams. From mid-May to September 1996, over 2700 CR images were acquired at the MASH and over 1600 at the CSH. Since there was not a radiologist at the MASH, the images were transferred to the CSH for primary diagnosis and archiving. In the same time period, over 550 patient folders were sent from the MASH to the CSH.

  5. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  6. Holography and optical information processing; Proceedings of the Soviet-Chinese Joint Seminar, Bishkek, Kyrgyzstan, Sept. 21-26, 1991

    NASA Astrophysics Data System (ADS)

    Mikaelian, Andrei L.

    Attention is given to data storage, devices, architectures, and implementations of optical memory and neural networks; holographic optical elements and computer-generated holograms; holographic display and materials; systems, pattern recognition, interferometry, and applications in optical information processing; and special measurements and devices. Topics discussed include optical immersion as a new way to increase information recording density, systems for data reading from optical disks on the basis of diffractive lenses, a new real-time optical associative memory system, an optical pattern recognition system based on a WTA model of neural networks, phase diffraction grating for the integral transforms of coherent light fields, holographic recording with operated sensitivity and stability in chalcogenide glass layers, a compact optical logic processor, a hybrid optical system for computing invariant moments of images, optical fiber holographic inteferometry, and image transmission through random media in single pass via optical phase conjugation.

  7. Analysis of sensor network observations during some simulated landslide experiments

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Lu, P.; Feng, T.; Chen, W.; Wu, H.; Qiao, G.; Liu, C.; Tong, X.; Li, R.

    2012-12-01

    A multi-sensor network was tested during experiments on a landslide simulation platform established at Tongji University (Shanghai, P.R. China), where landslides were triggered by means of artificial rainfall (see Figure 1). The sensor network currently incorporates contact sensors and two imaging systems. This represents a novel solution, because the spatial sensor network incorporates both contact sensors and remote sensors (video cameras). In future, these sensors will be installed on two real slopes in Sichuan province (South-West China), where the Wenchuan earthquake occurred in 2008. The earthquake caused the immediate activation of several landslides, while other areas became unstable and remain a menace to people and property. The platform incorporates the reconstructed scale slope, sensor network, communication system, database, and visualization system. Several landslide simulation experiments made it possible to ascertain which sensors could be most suitable for deployment in the Wenchuan area. The poster focuses on the analysis of results from down-scale simulations, where the different steps of the landslide evolution can be followed on the basis of sensor observations. These include underground sensors to detect the water table level and pressure in the ground, a set of accelerometers, and two inclinometers. In the first part of the analysis, the full data series are investigated to look for correlations and common patterns, as well as to link them to the physical processes. In the second, four subsets of sensors located in neighbouring positions are analyzed. The analysis of low- and high-speed image sequences allowed a dense displacement field on the slope surface to be tracked; these outcomes were compared with the ones obtained from accelerometers for cross-validation. Images were also used for the photogrammetric reconstruction of the slope topography during the experiment, so that volume computation and mass movements could be evaluated from the processed images. (Figure 1: The landslide simulation platform at Tongji University at the end of an experiment, showing the body of the simulated landslide.)

  8. Retinal angiography with real-time speckle variance optical coherence tomography.

    PubMed

    Xu, Jing; Han, Sherry; Balaratnasingam, Chandrakumar; Mammo, Zaid; Wong, Kevin S K; Lee, Sieun; Cua, Michelle; Young, Mei; Kirker, Andrew; Albiani, David; Forooghian, Farzin; Mackenzie, Paul; Merkur, Andrew; Yu, Dao-Yi; Sarunic, Marinko V

    2015-10-01

    This report describes a novel, non-invasive and label-free optical imaging technique, speckle variance optical coherence tomography (svOCT), for visualising blood flow within human retinal capillary networks. This imaging system uses a custom-built swept source OCT system operating at a line rate of 100 kHz. Real-time processing and visualisation is implemented on a consumer grade graphics processing unit. To investigate the quality of microvascular detail acquired with this device we compared images of human capillary networks acquired with svOCT and fluorescein angiography. We found that the density of capillary microvasculature acquired with this svOCT device was visibly greater than fluorescein angiography. We also found that this svOCT device had the capacity to generate en face images of distinct capillary networks that are morphologically comparable with previously published histological studies. Finally, we found that this svOCT device has the ability to non-invasively illustrate the common manifestations of diabetic retinopathy and retinal vascular occlusion. The results of this study suggest that graphics processing unit accelerated svOCT has the potential to non-invasively provide useful quantitative information about human retinal capillary networks. Therefore svOCT may have clinical and research applications for the management of retinal microvascular diseases, which are a major cause of visual morbidity worldwide.
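
    Speckle variance OCT is commonly computed as the inter-frame intensity variance over B-scans repeated at the same location, which is high where blood is flowing and low in static tissue. The following minimal Python sketch follows that common formulation (an assumption, since the abstract does not spell out the exact computation); array shapes are illustrative.

        # Minimal sketch: speckle variance across repeated B-scans and an en-face projection.
        import numpy as np

        n_repeats, depth, width = 4, 512, 500
        rng = np.random.default_rng(2)
        # Stack of repeated B-scans (intensity), shape: (repeats, depth, lateral).
        bscans = rng.random((n_repeats, depth, width))

        # Inter-frame variance per pixel; vessels appear as high-variance regions.
        sv = np.var(bscans, axis=0)

        # Simple en-face projection over a depth slab, as used for angiography maps.
        enface = sv[100:200, :].mean(axis=0)
        print(sv.shape, enface.shape)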

  9. Computer vision in roadway transportation systems: a survey

    NASA Astrophysics Data System (ADS)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  10. Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification

    NASA Astrophysics Data System (ADS)

    Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi

    2017-03-01

    In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists. Moreover, it is laborious and time-consuming work in cases of large-scale disasters. We have been investigating a tooth labeling method on dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network using the AlexNet architecture for detecting each tooth, and applied our previous method using regular AlexNet for classifying the detected teeth into the seven tooth types. From 52 CT volumes obtained by two imaging systems, five images each were randomly selected as test data, and the remaining 42 cases were used as training data. The results showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. The results indicate the potential utility of the proposed method for automatic recording of dental information.

  11. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  12. Tele-counseling and social-skill trainings using JGNII optical network and a mirror-interface system

    NASA Astrophysics Data System (ADS)

    Hashimoto, Sayuri; Hashimoto, Nobuyuki; Onozawa, Akira; Hosoya, Eiichi; Harada, Ikuo; Okunaka, Junzo

    2007-09-01

    "Tele-presence" communication using JGNII - an exclusive optical-fiber network system - was applied to social-skills training in the form of child-rearing support. This application focuses on internet counseling and social training skills that require interactive verbal and none-verbal communications. The motivation for this application is supporting local communities by constructing tele-presence education and entertainment systems using recently available, inexpensive IP networks. This latest application of tele-presence communication uses mirror-interface system which provides to users in remote locations a shared quasi-space where they can see themselves as if they were in the same room by overlapping video images from remote locations.

  13. Design of an image-distribution service from a clinical PACS

    NASA Astrophysics Data System (ADS)

    Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Felmlee, Joel P.; Gerhart, D. J.; Hangiandreou, Nicholas J.; Reardon, Frank J.; Shirk, M.; Forbes, Glenn S.; Williamson, Byrn, Jr.

    1994-05-01

    A PACS system has been developed through a multi-phase collaboration between the Mayo Clinic and IBM/Rochester. The current system has been fully integrated into the clinical practice of the Radiology Department for the primary purpose of digital image archival, retrieval, and networked workstation review. Work currently in progress includes the design and implementation of a gateway device for providing digital image data to third-party workstations, laser printers, and other devices, for users both within and outside of the Radiology Department.

  14. An integrated service digital network (ISDN)-based international telecommunication between Samsung Medical Center and Hokkaido University using telecommunication helped radiotherapy planning and information system (THERAPIS).

    PubMed

    Huh, S J; Shirato, H; Hashimoto, S; Shimizu, S; Kim, D Y; Ahn, Y C; Choi, D; Miyasaka, K; Mizuno, J

    2000-07-01

    This study introduces the integrated service digital network (ISDN)-based international teleradiotherapy system (THERAPIS) in radiation oncology between hospitals in Seoul, South Korea and in Sapporo, Japan. THERAPIS has the following functions: (1) exchange of patient's image data, (2) real-time teleconference, and (3) communication of the treatment planning, dose calculation and distribution, and of portal verification images between the remote hospitals. Our preliminary results of applications on eight patients demonstrated that the international telecommunication using THERAPIS was clinically useful and satisfactory with sufficient bandwidth for the transfer of patient data for clinical use in radiation oncology.

  15. Advanced road scene image segmentation and pavement evaluation using neural networks.

    DOT National Transportation Integrated Search

    2010-01-01

    The current project, funded by MIOH-UTC for the period 9/1/2009-8/31/2010, continues our efforts in designing an image processing based pavement inspection system for the assessment of highway surface conditions. One of the most important tasks i...

  16. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    PubMed

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  17. Performance Enhancement of a Web-Based Picture Archiving and Communication System Using Commercial Off-the-Shelf Server Clusters

    PubMed Central

    Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment. PMID:24701580

  18. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network.

    PubMed

    Prasoon, Adhish; Petersen, Kersten; Igel, Christian; Lauze, François; Dam, Erik; Nielsen, Mads

    2013-01-01

    Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. The main insight of this study is that better results were obtained by a deep learning architecture that autonomously learns the features from the images.
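
    As a concrete illustration of the triplanar idea described in this abstract, the following minimal Python (PyTorch) sketch feeds the xy, yz and zx patches centred on a voxel through three small 2D CNNs and concatenates their features for a cartilage/background decision. The patch size, channel counts and layer sizes are illustrative assumptions, not the published architecture.

        # Minimal sketch: three plane-specific 2D CNNs fused for per-voxel classification.
        import torch
        import torch.nn as nn

        def plane_cnn():
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten())

        class TriplanarNet(nn.Module):
            def __init__(self, patch=28, n_classes=2):
                super().__init__()
                self.cnn_xy, self.cnn_yz, self.cnn_zx = plane_cnn(), plane_cnn(), plane_cnn()
                with torch.no_grad():   # infer the flattened feature size
                    feat = self.cnn_xy(torch.zeros(1, 1, patch, patch)).shape[1]
                self.fc = nn.Linear(3 * feat, n_classes)

            def forward(self, xy, yz, zx):
                f = torch.cat([self.cnn_xy(xy), self.cnn_yz(yz), self.cnn_zx(zx)], dim=1)
                return self.fc(f)

        model = TriplanarNet()
        b = torch.randn(4, 1, 28, 28)    # one orthogonal patch per plane, per voxel
        print(model(b, b, b).shape)      # torch.Size([4, 2])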

  19. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.

    PubMed

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-03-23

    Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and the brightness of the background make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  20. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    PubMed Central

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-01-01

    Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and the brightness of the background make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works. PMID:29570690

  1. On detection and visualization techniques for cyber security situation awareness

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao

    2013-05-01

    Networking technologies are expanding exponentially to meet worldwide communication requirements, and the rapid growth and pervasiveness of network communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities that presents useful information to human analysts. In particular, we implement a prototypical system that includes distributed passive and active network sensors and traffic visualization features, such as 1D, 2D, and 3D network traffic displays. To detect attacks effectively, we also implement algorithms that transform real-world IP address data into images and study attack patterns, and we use both a discrete wavelet transform (DWT)-based scheme and a statistics-based scheme to detect attacks. Through an extensive simulation study, our data validate the effectiveness of the implemented defense system.
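
    As a concrete illustration of the DWT-based detection idea mentioned in this abstract, the following minimal Python sketch decomposes a one-dimensional traffic series with a discrete wavelet transform and flags intervals whose detail-coefficient energy exceeds a threshold. The 1D series, the wavelet choice and the threshold are simplified stand-ins; the paper works on images built from IP address data.

        # Minimal sketch: DWT detail-coefficient energy as an anomaly indicator.
        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        traffic = rng.poisson(100, size=1024).astype(float)   # normal background traffic
        traffic[600:620] += 400                                # injected burst ("attack")

        # Single-level DWT; the detail coefficients respond to abrupt changes.
        approx, detail = pywt.dwt(traffic, 'db4')
        energy = detail ** 2

        threshold = energy.mean() + 3 * energy.std()
        suspect = np.flatnonzero(energy > threshold)
        # Each detail coefficient covers roughly two samples of the original series.
        print('suspicious intervals near samples:', suspect * 2)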

  2. Imaging structural and functional brain networks in temporal lobe epilepsy.

    PubMed

    Bernhardt, Boris C; Hong, Seokjun; Bernasconi, Andrea; Bernasconi, Neda

    2013-10-01

    Early imaging studies in temporal lobe epilepsy (TLE) focused on the search for mesial temporal sclerosis, as its surgical removal results in clinically meaningful improvement in about 70% of patients. Nevertheless, a considerable subgroup of patients continues to suffer from post-operative seizures. Although the reasons for surgical failure are not fully understood, electrophysiological and imaging data suggest that anomalies extending beyond the temporal lobe may have a negative impact on outcome. This hypothesis has revived the concept of human epilepsy as a disorder of distributed brain networks. Recent methodological advances in non-invasive neuroimaging have made it possible to quantify structural and functional networks in vivo. While structural networks can be inferred from diffusion MRI tractography and inter-regional covariance patterns of structural measures such as cortical thickness, functional connectivity is generally computed based on statistical dependencies of neurophysiological time-series, measured through functional MRI or electroencephalographic techniques. This review considers the application of advanced analytical methods in structural and functional connectivity analyses in TLE. We specifically highlight findings from graph-theoretical analysis that allow assessing the topological organization of brain networks. These studies have provided compelling evidence that TLE is a system disorder with profound alterations in local and distributed networks. In addition, there is emerging evidence for the utility of network properties as clinical diagnostic markers. Nowadays, a network perspective is considered essential to the understanding of the development, progression, and management of epilepsy.

  3. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    NASA Astrophysics Data System (ADS)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high resolution multi-parametric imaging, large soft tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed to assist surgical diagnosis and therapy in MRI-guided procedures over the last decade, a unified method for developing robotic systems with high fidelity in terms of accuracy, dynamic performance, size, robustness, and modularity that work inside a closed-bore MRI scanner remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed loop control of piezoelectric motors, 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex-vivo phantom validates the system workflow and MRI compatibility, and shows that the robotic system has a positioning accuracy better than 0.01 mm.

  4. Fiber Optic Communication System For Medical Images

    NASA Astrophysics Data System (ADS)

    Arenson, Ronald L.; Morton, Dan E.; London, Jack W.

    1982-01-01

    This paper discusses a fiber optic communication system linking ultrasound devices, computed tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for the transfer of high resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques will be described. The switching methodology and communication protocols will also be discussed.

  5. A mathematical model of neuro-fuzzy approximation in image classification

    NASA Astrophysics Data System (ADS)

    Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.

    2016-06-01

    Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from a large database. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. A CBIR system based on grassland image properties, aided by a feed-forward back-propagation neural network for effective image retrieval, is proposed in this paper. Fuzzy memberships play an important role in the input space of the proposed system, leading to a combined neuro-fuzzy approximation in image classification. The mathematical model of the proposed CBIR system clarifies the fuzzy-neuro approximation and the convergence of the image features in a grassland image.

  6. Image understanding and the man-machine interface II; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Technical Reports Server (NTRS)

    Barrett, Eamon B. (Editor); Pearson, James J. (Editor)

    1989-01-01

    Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.

  7. An efficient and secure partial image encryption for wireless multimedia sensor networks using discrete wavelet transform, chaotic maps and substitution box

    NASA Astrophysics Data System (ADS)

    Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.

    2017-03-01

    Wireless Sensor Networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN are transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids, and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video, and digital images. Due to the bulky nature of multimedia data, WSNs process a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery life. Given battery life constraints, image compression combined with secure transmission over a wide-ranging sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, data transmission must be secured through a process known as encryption. As a result, there has been intense demand for schemes that are both energy efficient and highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps, and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform, and the image is then shuffled column-wise and row-wise via a Piece-wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. For higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from an Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained after substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the proposed scheme.
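
    As a concrete illustration of two of the building blocks listed in this abstract, the following minimal Python sketch derives a hash-dependent seed, drives a column permutation with a piecewise linear chaotic map (PWLCM), and XORs the result with a logistic-map keystream. It is a simplified stand-in: there is no DWT compression or S-box substitution, a plain logistic map replaces the intertwining variant, and all parameters are illustrative.

        # Minimal sketch: hash-seeded PWLCM column shuffle + chaotic XOR keystream.
        import hashlib
        import numpy as np

        def pwlcm(x, p, n):
            """Iterate the piecewise linear chaotic map n times; return the orbit."""
            out = np.empty(n)
            for i in range(n):
                y = x if x < 0.5 else 1.0 - x            # the map is symmetric about 0.5
                x = y / p if y < p else (y - p) / (0.5 - p)
                out[i] = x
            return out

        def logistic(x, n):
            """Orbit of the logistic map x -> 4x(1-x), used as a keystream source."""
            out = np.empty(n)
            for i in range(n):
                x = 4.0 * x * (1.0 - x)
                out[i] = x
            return out

        rng = np.random.default_rng(4)
        img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in plaintext

        # Hash of the plaintext seeds the PWLCM, so the permutation is image-dependent.
        h = hashlib.sha256(img.tobytes()).digest()
        x0 = int.from_bytes(h[:4], 'big') / 2**32 * 0.998 + 0.001

        cols = np.argsort(pwlcm(x0, p=0.37, n=img.shape[1]))        # column permutation
        shuffled = img[:, cols]

        keystream = (logistic(0.61, img.size) * 255).astype(np.uint8).reshape(img.shape)
        cipher = shuffled ^ keystream
        print(cipher.dtype, cipher.shape)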

  8. Advanced Secure Optical Image Processing for Communications

    NASA Astrophysics Data System (ADS)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information such as 2D and 3D images with high resolution. Thus, more complex networks and long processing times become necessary, and high image quality and transmission speeds are requested for an increasing number of applications. To satisfy these two requests, several either numerical or optical solutions were offered separately. This book explores both alternatives and describes research works that are converging towards optical/numerical hybrid solutions for high volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book in the purpose of combining the advantages of both techniques. Additionally, pure numerical or optical solutions are also considered since they emphasize the advantages of one of the two approaches separately.

  9. Low-complex energy-aware image communication in visual sensor networks

    NASA Astrophysics Data System (ADS)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complexity, low-bit-rate, energy-efficient image compression algorithm is presented, explicitly designed for resource-constrained visual sensor networks applied to surveillance, battlefield, and habitat monitoring, where voluminous amounts of image data have to be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating-point operations. Experiments are performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by a conventional DCT. The algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version, and it is well suited to embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required in existing algorithms.
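
    As a concrete illustration of the zonal selection described in this abstract, the following minimal Python sketch computes an 8x8 DCT per block and keeps only a small triangle of low-frequency coefficients. Unlike the paper's integer-only implementation, this sketch uses ordinary floating-point DCTs for clarity, and the zone size and test image are illustrative assumptions.

        # Minimal sketch: zonal DCT keeps only low-frequency coefficients per 8x8 block.
        import numpy as np
        from scipy.fft import dctn, idctn

        def zonal_mask(block_size=8, zone=3):
            """Keep coefficient (u, v) only if u + v < zone (low-frequency triangle)."""
            u, v = np.meshgrid(np.arange(block_size), np.arange(block_size), indexing='ij')
            return (u + v) < zone

        yy, xx = np.mgrid[0:64, 0:64]
        image = 128 + 60 * np.sin(xx / 9.0) * np.cos(yy / 11.0)      # smooth test image
        mask = zonal_mask()

        reconstructed = np.zeros_like(image)
        for r in range(0, image.shape[0], 8):
            for c in range(0, image.shape[1], 8):
                coeffs = dctn(image[r:r+8, c:c+8], norm='ortho') * mask   # zonal coefficients
                reconstructed[r:r+8, c:c+8] = idctn(coeffs, norm='ortho')

        kept = int(mask.sum())
        psnr = 10 * np.log10(255**2 / np.mean((image - reconstructed) ** 2))
        print(f'{kept} of 64 coefficients kept per block; PSNR = {psnr:.1f} dB')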

  10. Modeling fluctuations in default-mode brain network using a spiking neural network.

    PubMed

    Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko

    2012-08-01

    Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.
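
    In the spirit of the model summarized in this abstract, the following minimal Python sketch simulates a small population of leaky integrate-and-fire neurons with noisy drive, smooths the population rate into a slow BOLD-like signal, and checks how much spectral power falls below 0.1 Hz. The parameters, the crude moving-average "hemodynamic" smoothing, and the lack of network structure are all simplifying assumptions, not the published model.

        # Minimal sketch: LIF population rate -> slow signal -> low-frequency power fraction.
        import numpy as np

        rng = np.random.default_rng(6)
        n, dt, t_total = 100, 1e-3, 120.0            # neurons, step (s), duration (s)
        steps = int(t_total / dt)
        tau, v_th, v_reset = 0.02, 1.0, 0.0          # membrane time constant, threshold, reset

        v = np.zeros(n)
        rate = np.empty(steps)
        for t in range(steps):
            drive = 1.05 + 0.5 * rng.standard_normal(n)      # noisy external drive
            v += dt / tau * (-v + drive)
            spiking = v >= v_th
            v[spiking] = v_reset
            rate[t] = spiking.mean() / dt                    # population rate (Hz)

        # Crude "hemodynamic" smoothing: moving average over a few seconds.
        width = int(4.0 / dt)
        bold_like = np.convolve(rate, np.ones(width) / width, mode='same')

        freqs = np.fft.rfftfreq(steps, dt)
        power = np.abs(np.fft.rfft(bold_like - bold_like.mean())) ** 2
        low = (freqs > 0) & (freqs < 0.1)
        print('fraction of power below 0.1 Hz:', power[low].sum() / power[1:].sum())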

  11. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape and contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), the scale invariant feature transform (SIFT), coresets, and cellular automata is proposed. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.

  12. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  13. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  14. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PRelu activation function is studied to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains the convolution operation, which is very well suited to processing images. Using a deep convolutional neural network is better for image retrieval than direct extraction of image visual features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
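
    As a concrete illustration of the two ingredients named in this abstract, the following minimal Python (PyTorch) sketch builds a small convolutional network with PReLU activations and adds an L1 penalty on the weights to the classification loss for one training step. The architecture, input size, and regularization strength are illustrative assumptions, not the paper's retrieval network.

        # Minimal sketch: CNN with PReLU activations trained with an added L1 penalty.
        import torch
        import torch.nn as nn

        class SmallPReluCNN(nn.Module):
            def __init__(self, n_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2))
                self.head = nn.Linear(32 * 8 * 8, n_classes)   # assumes 32x32 inputs

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        model = SmallPReluCNN()
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        l1_lambda = 1e-5

        # One illustrative training step on dummy data.
        images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
        l1_penalty = sum(p.abs().sum() for p in model.parameters())
        loss = criterion(model(images), labels) + l1_lambda * l1_penalty
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(float(loss))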

  15. Functional neuroanatomical networks associated with expertise in motor imagery.

    PubMed

    Guillot, Aymeric; Collet, Christian; Nguyen, Vo An; Malouin, Francine; Richards, Carol; Doyon, Julien

    2008-07-15

    Although numerous behavioural studies provide evidence that there exist wide differences within individual motor imagery (MI) abilities, little is known with regards to the functional neuroanatomical networks that dissociate someone with good versus poor MI capacities. For the first time, we thus compared, through functional magnetic resonance imaging (fMRI), the pattern of cerebral activations in 13 skilled and 15 unskilled imagers during both physical execution and MI of a sequence of finger movements. Differences in MI abilities were assessed using well-established questionnaire and chronometric measures, as well as a new index based upon the subject's peripheral responses from the autonomic nervous system. As expected, both good and poor imagers activated the inferior and superior parietal lobules, as well as motor-related regions including the lateral and medial premotor cortex, the cerebellum and putamen. Inter-group comparisons revealed that good imagers activated more the parietal and ventrolateral premotor regions, which are known to play a critical role in the generation of mental images. By contrast, poor imagers recruited the cerebellum, orbito-frontal and posterior cingulate cortices. Consistent with findings from the motor sequence learning literature and Doyon and Ungerleider's model of motor learning [Doyon, J., Ungerleider, L.G., 2002. Functional anatomy of motor skill learning. In: Squire, L.R., Schacter, D.L. (Eds.), Neuropsychology of memory, Guilford Press, pp. 225-238], our results demonstrate that compared to skilled imagers, poor imagers not only need to recruit the cortico-striatal system, but to compensate with the cortico-cerebellar system during MI of sequential movements.

  16. Feedforward Categorization on AER Motion Events Using Cortex-Like Features in a Spiking Neural Network.

    PubMed

    Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin

    2015-09-01

    This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.

  17. Dynamic Functional Connectivity States Between the Dorsal and Ventral Sensorimotor Networks Revealed by Dynamic Conditional Correlation Analysis of Resting-State Functional Magnetic Resonance Imaging.

    PubMed

    Syed, Maleeha F; Lindquist, Martin A; Pillai, Jay J; Agarwal, Shruti; Gujar, Sachin K; Choe, Ann S; Caffo, Brian; Sair, Haris I

    2017-12-01

    Functional connectivity in resting-state functional magnetic resonance imaging (rs-fMRI) has received substantial attention since the initial findings of Biswal et al. Traditional network correlation metrics assume that the functional connectivity in the brain remains stationary over time. However, recent studies have shown that robust temporal fluctuations of functional connectivity among as well as within functional networks exist, challenging this assumption. In this study, these dynamic correlation differences were investigated between the dorsal and ventral sensorimotor networks by applying the dynamic conditional correlation model to rs-fMRI data of 20 healthy subjects. k-Means clustering was used to determine an optimal number of discrete connectivity states (k = 10) of the sensorimotor system across all subjects. Our analysis confirms the existence of differences in dynamic correlation between the dorsal and ventral networks, with highest connectivity found within the ventral motor network.
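
    As a rough illustration of the state-clustering step described in this abstract, the following minimal Python sketch computes time-varying correlation matrices between a few regional time series and groups them into discrete connectivity states with k-means. Simple sliding-window correlations are used here as a stand-in for the dynamic conditional correlation model, and the time series, window length, and k are illustrative assumptions.

        # Minimal sketch: windowed correlations -> k-means connectivity states.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        n_timepoints, n_regions, window, k = 300, 6, 40, 10
        ts = rng.standard_normal((n_timepoints, n_regions))     # stand-in rs-fMRI signals

        frames = []
        for start in range(0, n_timepoints - window):
            c = np.corrcoef(ts[start:start + window].T)
            frames.append(c[np.triu_indices(n_regions, k=1)])   # vectorised upper triangle
        frames = np.array(frames)

        states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(frames)
        print('state label per window:', states[:20])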

  18. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    PubMed

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions-such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing-were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  19. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    PubMed

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency comparing with the state-of-the-art approaches.
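
    The deep adaptive network removes most neural connections and represents the remaining ones with binary integers. The sketch below shows one plausible reading of that idea, generic magnitude-based pruning followed by sign binarization with a per-layer scale; it is not the authors' algorithm, and all names and ratios are illustrative.

      import numpy as np

      def prune_and_binarize(w, keep_ratio=0.01):
          """Keep only the largest-magnitude weights, then store them as {-1, +1}
          times a single per-layer scale. Illustrative only."""
          flat = np.abs(w).ravel()
          k = max(1, int(keep_ratio * flat.size))
          thresh = np.partition(flat, -k)[-k]          # magnitude threshold for the top-k weights
          mask = np.abs(w) >= thresh
          scale = np.abs(w[mask]).mean() if mask.any() else 0.0
          w_bin = (np.sign(w) * mask).astype(np.int8)  # entries in {-1, 0, +1}
          return w_bin, np.float32(scale), mask

      w = np.random.randn(256, 128)
      w_bin, scale, mask = prune_and_binarize(w, keep_ratio=0.001)  # ~99.9% of weights removed
      approx = scale * w_bin                           # dense reconstruction for comparison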

  20. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

    The goal of the Casimage project is to offer an authoring and editing environment integrated with the Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing remote access of users to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections into the RSNA web network dedicated to teaching files. This software can be installed on any PACS workstation to allow users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on a CD-ROM, on our web site, and through the MIRC network for public access.

  1. Dynamic tracking of magnetic nanoparticles for mapping microvascular networks using a clinical 1.5 T magnetic resonance scanner

    NASA Astrophysics Data System (ADS)

    Olamaei, Nina; Cheriet, Farida; Deschênes, Sylvain; Martel, Sylvain

    2014-05-01

    Being able to visualize blood vessels with an inner diameter of less than 150 μm is at the present limit of modern medical imaging modalities, and overcoming this limit is an important issue for advancing state-of-the-art medical imaging, diagnostics, surgery, and targeted interventions. In cancer therapy, such a capability would provide the information required for new delivery methods such as magnetic resonance navigation, which navigates therapeutic agents along a planned trajectory deeper into the vasculature, and hence closer to the region to be treated, to enhance the therapeutic index. To demonstrate the possibility of gathering images of microvascular networks dynamically and beyond the limitation of medical imaging modalities, the susceptibility artifact was used as the contrast mechanism in magnetic resonance imaging (MRI) to detect magnetic micro-aggregations of iron-oxide nanoparticles (150 ± 20 μm in diameter) as they were injected into a 2D synthetic microvascular network. Magnetic entities cause susceptibility artifacts in the images by disrupting the MRI's homogeneous magnetic field on a much larger scale than their actual size. The position of the artifact reflects the position of the aggregations in the vascular system. The calculated positions from discrete-time scans were extracted and assembled to build up the distribution of the vascular network. The results suggest that this method could be used to gather images of blood vessels beyond the spatial resolution of clinical medical imaging modalities, with an average error of approximately half a pixel confirmed on a 2D reconstruction of the micro-vessels.

  2. Connectopathy in Autism Spectrum Disorders: A Review of Evidence from Visual Evoked Potentials and Diffusion Magnetic Resonance Imaging

    PubMed Central

    Yamasaki, Takao; Maekawa, Toshihiko; Fujita, Takako; Tobimatsu, Shozo

    2017-01-01

    Individuals with autism spectrum disorder (ASD) show superior performance in processing fine details; however, they often exhibit impairments of gestalt face, global motion perception, and visual attention as well as core social deficits. Increasing evidence has suggested that social deficits in ASD arise from abnormal functional and structural connectivities between and within distributed cortical networks that are recruited during social information processing. Because the human visual system is characterized by a set of parallel, hierarchical, multistage network systems, we hypothesized that the altered connectivity of visual networks contributes to social cognition impairment in ASD. In the present review, we focused on studies of altered connectivity of visual and attention networks in ASD using visual evoked potentials (VEPs), event-related potentials (ERPs), and diffusion tensor imaging (DTI). A series of VEP, ERP, and DTI studies conducted in our laboratory have demonstrated complex alterations (impairment and enhancement) of visual and attention networks in ASD. Recent data have suggested that the atypical visual perception observed in ASD is caused by altered connectivity within parallel visual pathways and attention networks, thereby contributing to the impaired social communication observed in ASD. Therefore, we conclude that the underlying pathophysiological mechanism of ASD constitutes a “connectopathy.” PMID:29170625

  3. Deep Learning in Medical Imaging: General Overview.

    PubMed

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug

    2017-01-01

    The artificial neural network (ANN), a machine learning technique inspired by the human neuronal synapse system, was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architectures, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  4. Fuzzy-cellular neural network for face recognition HCI Authentication

    NASA Astrophysics Data System (ADS)

    Hoomod, Haider K.; ali, Ahmed abd

    2018-05-01

    With the rapid development of mobile device technology and the ease with which such devices interact with humans, mobile devices have become one of the most widely used means of communication. Mobile devices can carry large amounts of personal and sensitive data, yet are often left unprotected; PIN locks are inconvenient to use and have therefore seen low adoption, while biometrics are more convenient and less susceptible to fraud and manipulation. In this paper we propose an authentication technique for mobile face recognition based on cellular neural networks [1] and fuzzy rule control. Applying the proposed system on Android yielded good speed and recognition rates. Images were obtained in real time for 60 persons, each with 20 to 60 different face shots (about 3600 images in total); the results were FAR = 0, FRR = 1.66%, FER = 1.66, and accuracy = 98.34%.

  5. BoolFilter: an R package for estimation and identification of partially-observed Boolean dynamical systems.

    PubMed

    Mcclenny, Levi D; Imani, Mahdi; Braga-Neto, Ulisses M

    2017-11-25

    Gene regulatory networks govern the function of key cellular processes, such as control of the cell cycle, response to stress, DNA repair mechanisms, and more. Boolean networks have been used successfully in modeling gene regulatory networks. In the Boolean network model, the transcriptional state of each gene is represented by 0 (inactive) or 1 (active), and the relationship among genes is represented by logical gates updated at discrete time points. However, the Boolean gene states are never observed directly, but only indirectly and incompletely through noisy measurements based on expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays. The Partially-Observed Boolean Dynamical System (POBDS) signal model is distinct from other deterministic and stochastic Boolean network models in removing the requirement of a directly observable Boolean state vector and allowing uncertainty in the measurement process, addressing the scenario encountered in practice in transcriptomic analysis. BoolFilter is an R package that implements the POBDS model and associated algorithms for state and parameter estimation. It allows the user to estimate the Boolean states, network topology, and measurement parameters from time series of transcriptomic data using exact and approximated (particle) filters, as well as simulate the transcriptomic data for a given Boolean network model. Some of its infrastructure, such as the network interface, is the same as in the previously published R package for Boolean Networks BoolNet, which enhances compatibility and user accessibility to the new package. We introduce the R package BoolFilter for Partially-Observed Boolean Dynamical Systems (POBDS). The BoolFilter package provides a useful toolbox for the bioinformatics community, with state-of-the-art algorithms for simulation of time series transcriptomic data as well as the inverse process of system identification from data obtained with various expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays.
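
    BoolFilter itself is an R package; purely as an illustration of the partially-observed Boolean dynamical system idea it implements, the Python sketch below steps a toy three-gene Boolean network with Bernoulli process noise and produces noisy, indirect observations of the hidden state. The logic gates, noise rates, and variable names are invented for the example and are not taken from the package.

      import numpy as np

      rng = np.random.default_rng(0)

      def step(x, p_noise=0.05):
          """One update of a toy 3-gene Boolean network with Bernoulli process noise.
          x is a length-3 boolean vector; the logic gates below are illustrative."""
          nxt = np.array([
              x[1] & ~x[2],        # gene 0 activated by gene 1, repressed by gene 2
              x[0] | x[2],         # gene 1 activated by gene 0 or gene 2
              ~x[0],               # gene 2 repressed by gene 0
          ], dtype=bool)
          flip = rng.random(3) < p_noise
          return nxt ^ flip

      def observe(x, p_obs_err=0.1):
          """Noisy, indirect observation of the Boolean state (e.g. thresholded expression)."""
          return x ^ (rng.random(x.size) < p_obs_err)

      x = np.array([1, 0, 1], dtype=bool)
      trajectory, measurements = [], []
      for _ in range(20):
          x = step(x)
          trajectory.append(x.copy())
          measurements.append(observe(x))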

  6. 3-DIMENSIONAL Optoelectronic

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, Ashok Venketaraman

    This thesis covers the design, analysis, optimization, and implementation of optoelectronic (N,M,F) networks. (N,M,F) networks are generic space-division networks that are well suited to implementation using optoelectronic integrated circuits and free-space optical interconnects. An (N,M,F) networks consists of N input channels each having a fanout F_{rm o}, M output channels each having a fanin F_{rm i}, and Log_{rm K}(N/F) stages of K x K switches. The functionality of the fanout, switching, and fanin stages depends on the specific application. Three applications of optoelectronic (N,M,F) networks are considered. The first is an optoelectronic (N,1,1) content -addressable memory system that achieves associative recall on two-dimensional images retrieved from a parallel-access optical memory. The design and simulation of the associative memory are discussed, and an experimental emulation of a prototype system using images from a parallel-readout optical disk is presented. The system design provides superior performance to existing electronic content-addressable memory chips in terms of capacity and search rate, and uses readily available optical disk and VLSI technologies. Next, a scalable optoelectronic (N,M,F) neural network that uses free-space holographic optical interconnects is presented. The neural architecture minimizes the number of optical transmitters needed, and provides accurate electronic fanin with low signal skew, and dendritic-type fan-in processing capability in a compact layout. Optimal data-encoding methods and circuit techniques are discussed. The implementation of an prototype optoelectronic neural system, and its application to a simple recognition task is demonstrated. Finally, the design, analysis, and optimization of a (N,N,F) self-routing, packet-switched multistage interconnection network is described. The network is suitable for parallel computing and broadband switching applications. The tradeoff between optical and electronic interconnects is examined quantitatively by varying the electronic switch size K. The performance of the (N,N,F) network versus the fanning parameter F, is also analyzed. It is shown that the optoelectronic (N,N,F) networks provide a range of performance-cost alternatives, and offer superior performance-per-cost to fully electronic switching networks and to previous networks designs.

  7. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    PubMed

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. Its aim is to extract the relevant laws from the complex constitution phenomenon and ultimately to build a constitution classification system. Traditional identification methods, such as questionnaires, have the disadvantages of inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types according to face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm achieves an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.
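
    The pipeline above concatenates CNN face features with color features and feeds the result to a softmax classifier. The sketch below shows only that fusion-plus-softmax step, using randomly generated stand-in features (no actual CNN); the feature dimensions, the nine-class assumption, and the untrained weights are illustrative.

      import numpy as np

      def softmax(z):
          z = z - z.max(axis=1, keepdims=True)
          e = np.exp(z)
          return e / e.sum(axis=1, keepdims=True)

      # stand-in features: 8 face images, 128-dim CNN features and 16-dim color histograms
      cnn_feat = np.random.randn(8, 128)
      color_feat = np.random.randn(8, 16)
      fused = np.concatenate([cnn_feat, color_feat], axis=1)   # feature-level fusion

      n_classes = 9                                            # e.g. nine constitution types (assumed)
      W = np.random.randn(fused.shape[1], n_classes) * 0.01    # untrained weights, for shape only
      b = np.zeros(n_classes)
      probs = softmax(fused @ W + b)
      pred = probs.argmax(axis=1)                              # predicted constitution type per image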

  8. Filmless Radiology: The Design, Integration, Implementation, and Evaluation of a Digital Imaging Network. Potential Investigations to Be Conducted in Conjunction with the Digital Imaging Network System (DINS) evaluation Project. Revision 1

    DTIC Science & Technology

    1988-06-01

    Contract report fragment (U.S. Army Medical Research & Development Command, Contract No. DAMD17-86-C-6145). Care at three echelon levels is delivered from mobile facilities located up to 10, 40, and 150 kilometers from the forward edge of the battle area (FEBA), respectively. There are seven types of Army hospitals used in wartime theaters of operation. They range from Mobile Army Surgical Hospital (MASH) units ...

  9. Image dissemination and archiving.

    PubMed

    Robertson, Ian

    2007-08-01

    Images generated as part of the sonographic examination are an integral part of the medical record and must be retained according to local regulations. The standard medical image format, known as DICOM (Digital Imaging and COmmunications in Medicine) makes it possible for images from many different imaging modalities, including ultrasound, to be distributed via a standard internet network to distant viewing workstations and a central archive in an almost seamless fashion. The DICOM standard is a truly universal standard for the dissemination of medical images. When purchasing an ultrasound unit, the consumer should research the unit's capacity to generate images in a DICOM format, especially if one wishes interconnectivity with viewing workstations and an image archive that stores other medical images. PACS, an acronym for Picture Archive and Communication System refers to the infrastructure that links modalities, workstations, the image archive, and the medical record information system into an integrated system, allowing for efficient electronic distribution and storage of medical images and access to medical record data.

  10. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system comprising an image compressor and an image decompressor correlative to the image compressor, having an input connected to an output of the image compressor. A feedback summing node has one input connected to an output of the image decompressor, and a picture memory has an input connected to an output of the feedback summing node. Apparatus compares an image stored in the picture memory with a received input image, deduces therefrom the pixels having differences between the stored image and the received image, retrieves from the picture memory a partial image including only those pixels, and applies the partial image to another input of the feedback summing node, whereby an updated decompressed image is produced at the output of the feedback summing node. A subtraction node has one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image; the image compressor has an input connected to receive the difference image, whereby a compressed difference image is produced at the output of the image compressor.

  11. Integrating fuzzy object based image analysis and ant colony optimization for road extraction from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael

    2018-04-01

    An updated road network, as a crucial part of the transportation database, plays an important role in various applications. Thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object-based road extraction approach from very high resolution satellite images. Based on object-based image analysis, our approach incorporates various spatial, spectral, and textural object descriptors, the capabilities of a fuzzy logic system for handling the uncertainties in road modelling, and the effectiveness and suitability of the ant colony algorithm for optimization of network-related problems. Four VHR optical satellite images acquired by the WorldView-2 and IKONOS satellites are used to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93%, and 83%, respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparing the achieved results with those of four state-of-the-art algorithms and quantifying the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
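
    The approach above scores image objects with fuzzy rules over their descriptors before the ant colony optimization step. Purely as a toy illustration of fuzzy rule scoring (not the paper's rule set, and with the ant colony step omitted), the sketch below combines two invented descriptors, elongation and spectral homogeneity, with trapezoidal memberships and a min operator as the fuzzy AND; all thresholds are made up.

      import numpy as np

      def trapezoid(x, a, b, c, d):
          """Trapezoidal membership: rising on [a, b], flat on [b, c], falling on [c, d]."""
          return np.clip(np.minimum((x - a) / (b - a + 1e-9), (d - x) / (d - c + 1e-9)), 0.0, 1.0)

      def road_likelihood(elongation, homogeneity):
          """Fuzzy rule: an object is road-like if it is elongated AND spectrally homogeneous
          (min operator as the fuzzy AND). Membership parameters are illustrative."""
          mu_elong = trapezoid(elongation, 2.0, 5.0, 50.0, 80.0)
          mu_homog = trapezoid(homogeneity, 0.5, 0.7, 1.0, 1.01)
          return np.minimum(mu_elong, mu_homog)

      # toy usage: three candidate image objects
      print(road_likelihood(np.array([1.5, 8.0, 30.0]), np.array([0.4, 0.9, 0.8])))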

  12. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) image processing method. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality; however, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of the suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
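
    As a generic illustration of the residual-network idea used above, the sketch below defines a small residual denoising CNN in PyTorch, where the network predicts a correction that is added back to the noisy input. The framework choice, layer counts, channel width, and kernel sizes are assumptions for the example and do not reproduce the paper's tuned architectures.

      import torch
      import torch.nn as nn

      class ResidualDenoiser(nn.Module):
          """Generic residual CNN denoiser (illustrative layer counts and kernel sizes)."""
          def __init__(self, channels=1, width=32, n_layers=6):
              super().__init__()
              layers = [nn.Conv2d(channels, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
              for _ in range(n_layers - 2):
                  layers += [nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
              layers += [nn.Conv2d(width, channels, kernel_size=3, padding=1)]
              self.body = nn.Sequential(*layers)

          def forward(self, x):
              return x + self.body(x)       # residual connection around the whole stack

      # toy usage: one noisy 256x256 fluoroscopy-like frame
      net = ResidualDenoiser()
      noisy = torch.randn(1, 1, 256, 256)
      denoised = net(noisy)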

  13. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  14. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).

  15. Lag Synchronization of Switched Neural Networks via Neural Activation Function and Applications in Image Encryption.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Huang, Tingwen; Meng, Qinggang; Yao, Wei

    2015-07-01

    This paper investigates the problem of global exponential lag synchronization of a class of switched neural networks with time-varying delays via the neural activation function, and its applications in image encryption. The controller is dependent on the output of the system in the case of packed circuits, since it is hard to measure the inner state of the circuits. Thus, it is critical to design the controller based on the neuron activation function. Comparing the results in this paper with existing ones shows that we improve and generalize the results derived in the previous literature. Several examples are also given to illustrate the effectiveness and potential applications in image encryption.
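
    The image encryption application rests on using a synchronized chaotic signal as a keystream. As a generic illustration of that idea only (not the paper's scheme), the sketch below substitutes a logistic map for the synchronized neural network output and XORs the resulting byte stream with image pixels; the map, its parameters, and the key handling are invented for the example.

      import numpy as np

      def chaotic_keystream(n, x0=0.7, r=3.99):
          """Byte keystream from a logistic map; a generic stand-in for the synchronized
          chaotic signal used in chaos-based image encryption."""
          x, out = x0, np.empty(n, dtype=np.uint8)
          for i in range(n):
              x = r * x * (1.0 - x)
              out[i] = int(x * 256) % 256
          return out

      def xor_image(img, x0=0.7):
          flat = img.astype(np.uint8).ravel()
          ks = chaotic_keystream(flat.size, x0=x0)
          return (flat ^ ks).reshape(img.shape)

      img = (np.random.rand(64, 64) * 255).astype(np.uint8)
      cipher = xor_image(img, x0=0.7)
      plain = xor_image(cipher, x0=0.7)           # XOR with the same keystream decrypts
      assert np.array_equal(plain, img)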

  16. PAVECHECK : integrating deflection and GPR for network condition surveys.

    DOT National Transportation Integrated Search

    2009-01-01

    The PAVECHECK data integration and analysis system was developed to merge Falling Weight Deflectometer (FWD) and Ground Penetrating Radar (GPR) data together with digital video images of surface conditions. In this study Global Positioning System...

  17. Video movie making using remote procedure calls and 4BSD Unix sockets on Unix, UNICOS, and MS-DOS systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, D.W.; Johnston, W.E.; Hall, D.E.

    1990-03-01

    We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.

  18. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control. These techniques are applied to robot navigation.

  19. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    PubMed

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems for studying the development of connections in living neuronal networks and the interplay between connectivity, activity, and function. These cultured networks show rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network, such as the GABAA switch. Third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
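
    The spectral analysis described above amounts to computing a power spectral density per calcium trace and summarizing it. A minimal sketch of that step is shown below using SciPy's Welch estimator; the sampling rate, frequency band, segment length, and the band-power summary are assumptions chosen only to make the example concrete.

      import numpy as np
      from scipy.signal import welch

      def band_power(traces, fs=10.0, band=(0.1, 1.0)):
          """traces: (neurons, samples) calcium fluorescence signals.
          Returns the mean spectral power of each neuron inside `band` (Hz)."""
          freqs, psd = welch(traces, fs=fs, nperseg=256, axis=-1)
          sel = (freqs >= band[0]) & (freqs <= band[1])
          return psd[:, sel].mean(axis=1)

      traces = np.random.randn(50, 3000)        # toy data: 50 neurons, 5 minutes at 10 Hz
      power = band_power(traces)                # one summary value per neuron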

  20. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Using Satellite Imagery with ET Weather Station Networks to Map Crop Water Use for Irrigation Scheduling: TOPS-SIMS.

    USDA-ARS?s Scientific Manuscript database

    Evapotranspiration estimates for scheduling irrigation must be field specific and real time. Weather station networks provide daily reference ET values, but users need to select crop coefficients for their particular crop and field. A prototype system has been developed that combines satellite image...

  2. Conditional random field modelling of interactions between findings in mammography

    NASA Astrophysics Data System (ADS)

    Kooi, Thijs; Mordang, Jan-Jurre; Karssemeijer, Nico

    2017-03-01

    Recent breakthroughs in training deep neural network architectures, in particular deep Convolutional Neural Networks (CNNs), have made a big impact on vision research and are increasingly responsible for advances in Computer Aided Diagnosis (CAD). Since many natural scenes and medical images vary in size and are too large to feed to the networks as a whole, two-stage systems are typically employed, where in the first stage small regions of interest in the image are located and presented to the network as training and test data. These systems allow us to harness accurate region-based annotations, making the problem easier to learn. However, information is processed purely locally and context is not taken into account. In this paper, we present preliminary work on the employment of a Conditional Random Field (CRF) that is trained on top of the CNN to model contextual interactions, such as the presence of other suspicious regions, for mammography CAD. The model can easily be extended to incorporate other sources of information, such as symmetry, temporal change, and various patient covariates, and is general in the sense that it can have application in other CAD problems.

  3. Development and implementation of ultrasound picture archiving and communication system

    NASA Astrophysics Data System (ADS)

    Weinberg, Wolfram S.; Tessler, Franklin N.; Grant, Edward G.; Kangarloo, Hooshang; Huang, H. K.

    1990-08-01

    The Department of Radiological Sciences at the UCLA School of Medicine is developing an archiving and communication system (PACS) for digitized ultrasound images. In its final stage the system will involve the acquisition and archiving of ultrasound studies from four different locations, including the Center for Health Sciences, the Department for Mental Health, and the Outpatient Radiology and Endoscopy Departments, with a total of 200-250 patient studies per week. The concept comprises two stages of image manipulation for each ultrasound work area. The first station is located close to the examination site and accommodates the acquisition of digital images from up to five ultrasound devices, and provides for instantaneous display, primary viewing, and image selection. Completed patient studies are transferred to a main workstation for secondary review, further analysis, and comparison studies. The review station has an on-line storage capacity of 10,000 images with a resolution of 512x512 8-bit data, to allow for immediate retrieval of active patient studies of up to two weeks. The main workstations are connected through the general network and use one central archive for long-term storage and a film printer for hardcopy output. First-phase development efforts concentrate on the implementation and testing of a system at one location consisting of a number of ultrasound units with video digitizers and network interfaces and a microcomputer workstation as host for the display station with two color monitors, each allowing simultaneous display of four 512x512 images. The discussion emphasizes functionality, performance, and acceptance of the system in the clinical environment.
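
    The stated on-line capacity of 10,000 uncompressed 512x512 8-bit images implies a storage requirement that can be checked with a quick back-of-the-envelope calculation, sketched below under the assumption of no compression and no header overhead.

      # On-line storage needed for 10,000 uncompressed 512x512 8-bit images
      # (no compression or file header overhead assumed).
      bytes_per_image = 512 * 512 * 1          # 8 bits = 1 byte per pixel -> 262,144 bytes
      total_bytes = 10_000 * bytes_per_image
      print(f"{bytes_per_image / 1024:.0f} KiB per image, "
            f"{total_bytes / 1024**3:.2f} GiB for 10,000 images")   # ~256 KiB and ~2.44 GiB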

  4. Identification of alterations associated with age in the clustering structure of functional brain networks.

    PubMed

    Guzman, Grover E C; Sato, Joao R; Vidal, Maciel C; Fujita, Andre

    2018-01-01

    Initial studies using resting-state functional magnetic resonance imaging on the trajectories of the brain network from childhood to adulthood found evidence of functional integration and segregation over time. Understanding how functional integration and segregation occur in healthy individuals is crucial to enhance our understanding of possible deviations that may lead to brain disorders. Recent approaches have focused on the framework wherein the functional brain network is organized into spatially distributed modules that have been associated with specific cognitive functions. Here, we tested the hypothesis that the clustering structure of brain networks evolves during development. To address this hypothesis, we defined a measure of how well a brain region is clustered (the network fitness index) and developed a method to evaluate its association with age. We then applied this method to a functional magnetic resonance imaging data set composed of 397 males under 31 years of age, collected as part of the Autism Brain Imaging Data Exchange Consortium. As a result, we identified two brain regions for which the clustering structure changes over time, namely, the left middle temporal gyrus and the left putamen. Since the network fitness index is associated with both integration and segregation, our finding suggests that the identified brain regions play a role in the development of brain systems.

  5. Quantum Assisted Learning for Registration of MODIS Images

    NASA Astrophysics Data System (ADS)

    Pelissier, C.; Le Moigne, J.; Fekete, G.; Halem, M.

    2017-12-01

    The advent of the first large-scale quantum annealer by D-Wave has led to increased interest in quantum computing. However, the D-Wave quantum annealing computer is limited to either solving Quadratic Unconstrained Binary Optimization problems (QUBOs) or using the ground-state sampling of an Ising system that the D-Wave can produce. These restrictions make it challenging to find algorithms that accelerate the computation of typical Earth Science applications. A major difficulty is that most applications have continuous real-valued parameters rather than binary ones. Here we present an exploratory study using ground-state sampling to train artificial neural networks (ANNs) to carry out image registration of MODIS images. The key idea behind using the D-Wave to train networks is that the quantum chip behaves thermally like a Boltzmann machine (BM), and BMs are known to be successful at recognizing patterns in images. The ground-state sampling of the D-Wave also depends on the dynamics of the adiabatic evolution and is subject to other non-thermal fluctuations, but the statistics are thought to be similar, and ANNs tend to be robust under fluctuations. In light of this, the D-Wave ground-state sampling is used to define a Boltzmann-like generative model and is investigated for registering MODIS images. Image intensities of MODIS images are transformed using a Discrete Cosine Transform and used to train a multilayer network to learn how to align images to a reference image. The network layers consist of an initial sigmoid layer acting as a binary filter of the input, followed by strict binarization using Bernoulli sampling, whose output is then fed into a Boltzmann machine. The output is then classified using a soft-max layer. Results are presented and discussed.

  6. Picture archiving and communication system--Part one: Filmless radiology and distance radiology.

    PubMed

    De Backer, A I; Mortelé, K J; De Keulenaer, B L

    2004-01-01

    Picture archiving and communication system (PACS) is a collection of technologies used to carry out digital medical imaging. PACS is used to digitally acquire medical images from the various modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and digital projection radiography. The image data and pertinent information are transmitted to other and possibly remote locations over networks, where they may be displayed on computer workstations for soft copy viewing in multiple locations, thus permitting simultaneous consultations and almost instant reporting from radiologists at a distance. Data are secured and archived on digital media such as optical disks or tape, and may be automatically retrieved as necessary. Close integration with the hospital information system (HIS)--radiology information system (RIS) is critical for system functionality. Medical image management systems are maturing, providing access outside of the radiology department to images throughout the hospital via the Ethernet, at different hospitals, or from a home workstation if teleradiology has been implemented.

  7. Status Report on Image Information Systems and Image Data Base Technology

    DTIC Science & Technology

    1989-12-01

    Fragmentary record text; recoverable content mentions database and networking products (PowerHouse, StarGate, StarNet, StarBase, Sybase), acceptance at major teaching universities (Australia), the U.S.A.F., and major corporations, and components including a StarNet network server and a StarGate SQL gateway.

  8. Potential and challenges in use of thermal imaging for humid region irrigation system management

    USDA-ARS?s Scientific Manuscript database

    Thermal imaging has shown potential to assist with many aspects of irrigation management including scheduling water application, detecting leaky irrigation canals, and gauging the overall effectiveness of water distribution networks used in furrow irrigation. Many challenges exist for the use of the...

  9. A Partnership Training Program in Breast Cancer Diagnosis: Concept Development of the Next Generation Diagnostic Breast Imaging Using Digital Image Library and Networking Techniques

    DTIC Science & Technology

    2004-10-01

    This program represents a training partnership between Howard University (HU) (Department of Electrical Engineering, Department of Systems and ...). Participants from Georgetown and Howard University will take part in training through seminars, specialized tutorials, and workshops. Outside distinguished ...

  10. Probabilistic atlases of default mode, executive control and salience network white matter tracts: an fMRI-guided diffusion tensor imaging and tractography study

    PubMed Central

    Figley, Teresa D.; Bhullar, Navdeep; Courtney, Susan M.; Figley, Chase R.

    2015-01-01

    Diffusion tensor imaging (DTI) is a powerful MRI technique that can be used to estimate both the microstructural integrity and the trajectories of white matter pathways throughout the central nervous system. This fiber tracking (aka, “tractography”) approach is often carried out using anatomically-defined seed points to identify white matter tracts that pass through one or more structures, but can also be performed using functionally-defined regions of interest (ROIs) that have been determined using functional MRI (fMRI) or other methods. In this study, we performed fMRI-guided DTI tractography between all of the previously defined nodes within each of six common resting-state brain networks: the dorsal Default Mode Network (dDMN), ventral Default Mode Network (vDMN), left Executive Control Network (lECN), right Executive Control Network (rECN), anterior Salience Network (aSN), and posterior Salience Network (pSN). By normalizing the data from 32 healthy control subjects to a standard template (using high-dimensional, non-linear warping methods), we were able to create probabilistic white matter atlases for each tract in stereotaxic coordinates. By investigating all 198 ROI-to-ROI combinations within the aforementioned resting-state networks (for a total of 6336 independent DTI tractography analyses), the resulting probabilistic atlases represent a comprehensive cohort of functionally-defined white matter regions that can be used in future brain imaging studies to: (1) ascribe DTI or other white matter changes to particular functional brain networks, and (2) complement resting-state fMRI or other functional connectivity analyses. PMID:26578930

  11. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    NASA Technical Reports Server (NTRS)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  12. A Total Information Management System For All Medical Images

    NASA Astrophysics Data System (ADS)

    Ouimette, Donald; Nudelman, Sol; Ramsby, Gale; Spackman, Thomas

    1985-09-01

    A PACS has been designed for the University of Connecticut Health Center to serve all departments acquiring images for diagnosis, surgery, and therapy. It incorporates a multiple-community communications architecture to provide complete information management for medical images, medical data, and departmental administrative matter. The system is modular and expandable. It permits an initial installation for radiology and subsequent expansion to include other departments at the Health Center, beginning with internal medicine, surgery, ophthalmology, and dentistry. The design permits sufficient expansion to offer the potential for accepting the additional burden of a hospital information system. The primary parameters that led to this system design were the anticipation that departments could in time generate 60 to 90% of their images in a form suited to insertion in a PACS, that a high network throughput for large block image transfers would be essential, and that total system reliability was fundamental to success.

  13. Computer systems for annotation of single molecule fragments

    DOEpatents

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  14. Computer-assisted cervical cancer screening using neural networks.

    PubMed

    Mango, L J

    1994-03-15

    A practical and effective system for the computer-assisted screening of conventionally prepared cervical smears is presented and described. Recent developments in neural network technology have made computerized analysis of the complex cellular scenes found on Pap smears possible. The PAPNET Cytological Screening System uses neural networks to automatically analyze conventional smears by locating and recognizing potentially abnormal cells. It then displays images of these objects for review and final diagnosis by qualified cytologists. The results of the studies presented indicate that the PAPNET system could be a useful tool for both the screening and rescreening of cervical smears. In addition, the system has been shown to be sensitive to some types of abnormalities which have gone undetected during manual screening.

  15. Rapid Object Detection Systems, Utilising Deep Learning and Unmanned Aerial Systems (uas) for Civil Engineering Applications

    NASA Astrophysics Data System (ADS)

    Griffiths, D.; Boehm, J.

    2018-05-01

    With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper accesses the potential of rapid generation of Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3m2 segments of railway track. These includes two models based on the Faster RCNN object detection algorithm (Resnet and Incpetion-Resnet) as well as the novel onestage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics. The first two consisted of Intersection over Union (IoU) with thresholds 0.5 and 0.1. The last assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling) the models detected 91.3 %, 83.1 % and 75.6 % of track in the 500 test images acquired from the UAS survey Retinanet, Resnet and Inception-Resnet respectively. We then discuss the potential for such applications of such systems within the engineering field for a range of scenarios.
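
    Two of the accuracy metrics above are Intersection over Union (IoU) at thresholds of 0.5 and 0.1. The sketch below shows the standard IoU computation for axis-aligned boxes; the [x1, y1, x2, y2] box format and the sample coordinates are assumptions for illustration.

      def iou(box_a, box_b):
          """Intersection over Union for two axis-aligned boxes in [x1, y1, x2, y2] format."""
          ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
          ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
          inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
          area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
          area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
          return inter / (area_a + area_b - inter)

      detection, ground_truth = [10, 10, 60, 50], [20, 15, 70, 55]
      print(iou(detection, ground_truth) >= 0.5)   # counts as a detection under the stricter threshold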

  16. North-South America Network of Magnetically Conjugate All-Sky Imagers

    DTIC Science & Technology

    2015-01-02

    ... based (radio and optical) systems and via in-situ instruments onboard rockets and satellites. The ionosphere can be observed by a variety ... to study the full system in a comprehensive way. The two locations at the base of a given terrestrial B-line (one in the northern hemisphere and one ... imagers to be established. This was based on the need to study ionospheric disturbances ordered by their geomagnetic ...

  17. Remotely supported prehospital ultrasound: A feasibility study of real-time image transmission and expert guidance to aid diagnosis in remote and rural communities.

    PubMed

    Eadie, Leila; Mulhern, John; Regan, Luke; Mort, Alasdair; Shannon, Helen; Macaden, Ashish; Wilson, Philip

    2017-01-01

    Introduction: Our aim is to expedite prehospital assessment of remote and rural patients using remotely-supported ultrasound and satellite/cellular communications. In this paradigm, paramedics are remotely-supported ultrasound operators, guided by hospital-based specialists, to record images before receiving diagnostic advice. Technology can support users in areas with little access to medical imaging and suboptimal communications coverage by connecting to multiple cellular networks and/or satellites to stream live ultrasound and audio-video. Methods: An ambulance-based demonstrator system captured standard trauma and novel transcranial ultrasound scans from 10 healthy volunteers at 16 locations across the Scottish Highlands. Volunteers underwent brief scanning training before receiving expert guidance via the communications link. Ultrasound images were streamed with an audio/video feed to reviewers for interpretation. Two sessions were transmitted via satellite and 21 used cellular networks. Reviewers rated image and communication quality, and their utility for diagnosis. Transmission latency and bandwidth were recorded, and effects of scanner and reviewer experience were assessed. Results: Appropriate views were provided in 94% of the simulated trauma scans. The mean upload rate was 835/150 kbps and mean latency was 114/2072 ms for cellular and satellite networks, respectively. Scanning experience had a significant impact on time to achieve a diagnostic image, and review of offline scans required significantly less time than live-streamed scans. Discussion: This prehospital ultrasound system could facilitate early diagnosis and streamlining of treatment pathways for remote emergency patients, being particularly applicable in rural areas worldwide with poor communications infrastructure and extensive transport times.

  18. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.

    PubMed

    Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang

    2018-06-01

    Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
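
    The segmentation accuracy above is reported as a mean Dice similarity coefficient of 93%. The sketch below shows the standard Dice computation for two binary masks; the NumPy implementation and the toy masks are illustrative, not the paper's evaluation code.

      import numpy as np

      def dice(mask_a, mask_b, eps=1e-8):
          """Dice similarity coefficient between two binary segmentation masks."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          inter = np.logical_and(a, b).sum()
          return 2.0 * inter / (a.sum() + b.sum() + eps)

      pred = np.zeros((128, 128), dtype=bool); pred[30:90, 30:90] = True
      gt   = np.zeros((128, 128), dtype=bool); gt[35:95, 35:95] = True
      print(round(dice(pred, gt), 3))           # overlap score for the toy masks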

  19. Neural network for intelligent query of an FBI forensic database

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.; Rainey, Timothy G.; Balasubramanian, Uma; Brettle, Dean W.; Weingard, Fred; Sibert, Robert W.; Birnbaum, Eric

    1997-02-01

    Examiner is an automated fired cartridge case identification system utilizing a dual-use neural network pattern recognition technology, called the statistical-multiple object detection and location system (S-MODALS), developed by Booz·Allen & Hamilton, Inc. in conjunction with Rome Laboratory. S-MODALS was originally designed for automatic target recognition (ATR) of tactical and strategic military targets using multisensor fusion [electro-optical (EO), infrared (IR), and synthetic aperture radar (SAR)] sensors. Since S-MODALS is a learning system readily adaptable to problem domains other than automatic target recognition, the pattern matching problem of microscopic marks for firearms evidence was analyzed using S-MODALS. The physics, phenomenology, discrimination and search strategies, robustness requirements, and error-level and confidence-level propagation that apply to the pattern matching problem of military targets were found to be applicable to the ballistic domain as well. The Examiner system uses S-MODALS to rank a set of queried cartridge case images from the most similar to the least similar image in reference to an investigative fired cartridge case image. The paper presents three independent tests and evaluation studies of the Examiner system utilizing the S-MODALS technology for the Federal Bureau of Investigation.

  20. Simple Smartphone-Based Guiding System for Visually Impaired People

    PubMed Central

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-01-01

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them. PMID:28608811

  1. Simple Smartphone-Based Guiding System for Visually Impaired People.

    PubMed

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-06-13

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems of visually impaired people and achieving obstacle avoidance, so that visually impaired people can travel smoothly from a starting point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the Faster R-CNN (faster region-based convolutional neural network) algorithm or the YOLO (you only look once) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The obstacle recognition accuracy in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.

  2. In vivo integrated photoacoustic ophthalmoscopy, optical coherence tomography, and scanning laser ophthalmoscopy for retinal imaging

    NASA Astrophysics Data System (ADS)

    Song, Wei; Zhang, Rui; Zhang, Hao F.; Wei, Qing; Cao, Wenwu

    2012-12-01

    The physiological and pathological properties of retina are closely associated with various optical contrasts. Hence, integrating different ophthalmic imaging technologies is more beneficial in both fundamental investigation and clinical diagnosis of several blinding diseases. Recently, photoacoustic ophthalmoscopy (PAOM) was developed for in vivo retinal imaging in small animals, which demonstrated the capability of imaging retinal vascular networks and retinal pigment epithelium (RPE) at high sensitivity. We combined PAOM with traditional imaging modalities, such as fluorescein angiography (FA), spectral-domain optical coherence tomography (SD-OCT), and auto-fluorescence scanning laser ophthalmoscopy (AF-SLO), for imaging rats and mice. The multimodal imaging system provided more comprehensive evaluation of the retina based on the complementary imaging contrast mechanisms. The high-quality retinal images show that the integrated ophthalmic imaging system has great potential in the investigation of blinding disorders.

  3. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors

    PubMed Central

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-01-01

    Conventional finger-vein recognition systems perform recognition based on finger-vein lines extracted from the input images, or on image enhancement and texture features extracted from the finger-vein images. In these cases, however, inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on the form of the optimal extraction filter, considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes, based on a convolutional neural network (CNN). In experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the proposed method showed better performance than the conventional methods. PMID:28587269

  4. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.

    PubMed

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-06-06

    Conventional finger-vein recognition systems perform recognition based on finger-vein lines extracted from the input images, or on image enhancement and texture features extracted from the finger-vein images. In these cases, however, inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on the form of the optimal extraction filter, considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes, based on a convolutional neural network (CNN). In experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the proposed method showed better performance than the conventional methods.

  5. Medical image diagnoses by artificial neural networks with image correlation, wavelet transform, simulated annealing

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1993-09-01

    Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing a real time medical image diagnosis. An algorithm known as the self-reference matched filter that emulates the spatio-temporal integration ability of the human visual system might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood pixels' relationship; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for the intra-inter cluster-segregation respectively useful for top-down ANN designs.
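
    For illustration only, the snippet below sketches a fast (Cauchy) simulated-annealing loop of the kind attributed to the Cauchy machine, with a placeholder energy function standing in for the paper's mini-max energy; the cooling schedule T_k = T_0/(1+k) and heavy-tailed moves are the essential ingredients.

```python
# Illustrative fast (Cauchy) simulated annealing: heavy-tailed moves with a
# hyperbolic cooling schedule. The energy function is a stand-in, not the
# paper's mini-max energy.
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return np.sum(x**2)                              # placeholder objective

x = rng.normal(size=8)
T0, best = 10.0, x.copy()
for k in range(1, 5001):
    T = T0 / (1 + k)                                 # Cauchy (fast) cooling schedule
    step = T * rng.standard_cauchy(size=x.shape)     # heavy-tailed Cauchy moves
    candidate = x + step
    dE = energy(candidate) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):     # Metropolis acceptance
        x = candidate
        if energy(x) < energy(best):
            best = x.copy()
print("best energy:", energy(best))
```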

  6. Network image data bank prototype: the RSI project (Resume de Sortie Images)

    NASA Astrophysics Data System (ADS)

    Abouchi, Nacer; Jourlin, Michel; Bohbot, Oriana; Faurie, Catherine; Grisel, Richard

    1995-02-01

    The Hospital Edouard Herriot in Lyon and the 3M company, associated with the Electronics Department of the Chemistry, Physics and Electronics Engineering School (CPE), decided in 1993 to begin a study on an image network project. This project is composed of many practical applications to be checked one by one. The purpose of this paper is to discuss the context, which is a kind of small picture archiving and communication system (PACS), to explain the methodology that has been used for the hardware and software implementation, and to give examples of the first results obtained. One of the main interests of the results is the possibility of obtaining on the same support, a 3M laser imager, a film including images from different modalities and an abstract summing up the patient's stay in the hospital. The framework used is built around Omnis7 and the C++ language on a PC.

  7. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    PubMed

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
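
    The frozen-feature-extractor setup described above can be sketched as follows, assuming PyTorch/torchvision rather than the authors' original tooling; only the fully connected head is retrained for the 11 ultrasound categories.

```python
# Hedged sketch of transfer learning with frozen convolutional layers, using
# torchvision's VGG16 as a stand-in for the paper's CaffeNet/VGGNet setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")    # downloads ImageNet weights on first use
                                                 # (older torchvision: models.vgg16(pretrained=True))
for p in model.features.parameters():            # freeze all convolutional layers
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 11)        # new output layer for 11 categories

# Only the classifier parameters are passed to the optimizer.
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
```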

  8. Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian

    2008-04-01

    Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. Firstly, global statistical color features of the left image are computed based on statistical theory. Then, for red, yellow and blue traffic signs, the left image is segmented into three binary images by a self-adaptive color segmentation method. Secondly, gray-value projection and shape analysis are used to confirm traffic sign regions in the left image, and stereo image matching is then used to locate the corresponding traffic signs in the right image. Thirdly, self-adaptive image segmentation is used to extract the binary inner core shapes of the detected traffic signs, and one-dimensional feature vectors of the inner core shapes are computed by central projection transformation. Fourthly, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, the recognition results in the left image are compared with those in the right image; if the results in the stereo pair are identical, they are confirmed as the final recognition results. The new algorithm was applied to 220 real images of natural scenes taken by a vehicle-borne mobile photogrammetry system in Nanjing at different times. Experimental results show a detection and recognition rate of over 92%, so the algorithm is not only simple, but also reliable and fast for real traffic sign detection and recognition. Furthermore, it can obtain the geometrical information of traffic signs at the same time as recognizing their types.
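
    The final recognition stage can be illustrated with a small probabilistic neural network (Parzen-window) classifier over the one-dimensional central-projection feature vectors; the data and kernel width below are placeholders, not the paper's.

```python
# Toy probabilistic neural network (PNN): each class is scored by a Gaussian
# kernel-density estimate centred on its training feature vectors.
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.1):
    """Return the class whose kernel-density estimate at x is largest."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))   # Parzen window average
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
train_X = rng.normal(size=(60, 16))          # toy 16-D central-projection feature vectors
train_y = np.repeat([0, 1, 2], 20)           # three sign classes
print(pnn_classify(train_X[5] + 0.01, train_X, train_y))
```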

  9. Real-time camera-based face detection using a modified LAMSTAR neural network system

    NASA Astrophysics Data System (ADS)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640×480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted for auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.

  10. Snapshot Imaging Spectrometry in the Visible and Long Wave Infrared

    NASA Astrophysics Data System (ADS)

    Maione, Bryan David

    Imaging spectrometry is an optical technique in which the spectral content of an object is measured at each location in space. The main advantage of this modality is that it enables characterization beyond what is possible with a conventional camera, since spectral information is generally related to the chemical composition of the object. Due to this, imaging spectrometers are often capable of detecting targets that are either morphologically inconsistent, or even under-resolved. A specific class of imaging spectrometer, known as a snapshot system, seeks to measure all spatial and spectral information simultaneously, thereby rectifying artifacts associated with scanning designs, and enabling the measurement of temporally dynamic scenes. Snapshot designs are the focus of this dissertation. Three designs for snapshot imaging spectrometers are developed, each providing novel contributions to the field of imaging spectrometry. In chapter 2, the first spatially heterodyned snapshot imaging spectrometer is modeled and experimentally validated. Spatial heterodyning is a technique commonly implemented in non-imaging Fourier transform spectrometry. For Fourier transform imaging spectrometers, spatial heterodyning improves the spectral resolution trade space. Additionally, in this chapter a unique neural network based spectral calibration is developed and determined to be an improvement beyond Fourier and linear operator based techniques. Leveraging spatial heterodyning as developed in chapter 2, in chapter 3, a high spectral resolution snapshot Fourier transform imaging spectrometer, based on a Savart plate interferometer, is developed and experimentally validated. The sensor presented in this chapter is the highest spectral resolution sensor in its class. High spectral resolution enables the sensor to discriminate narrowly spaced spectral lines. The capabilities of neural networks in imaging spectrometry are further explored in this chapter. Neural networks are used to perform single target detection on raw instrument data, thereby eliminating the need for an explicit spectral calibration step. As an extension of the results in chapter 2, neural networks are once again demonstrated to be an improvement when compared to linear operator based detection. In chapter 4 a non-interferometric design is developed for the long wave infrared (wavelengths spanning 8-12 microns). The imaging spectrometer developed in this chapter is a multi-aperture filtered microbolometer. Since the detector is uncooled, the presented design is ultra-compact and low power. Additionally, cost effective polymer absorption filters are used in lieu of interference filters. Since each measurement of the system is spectrally multiplexed, an SNR advantage is realized. A theoretical model for the filtered design is developed, and the performance of the sensor for detecting liquid contaminants is investigated. Similar to past chapters, neural networks are used and achieve false detection rates of less than 1%. Lastly, this dissertation is concluded with a discussion on future work and the potential impact of these devices.

  11. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment.

    PubMed

    Mezgec, Simon; Koroušić Seljak, Barbara

    2017-06-27

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.

  12. DICOM: a standard for medical imaging

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Bidgood, W. Dean

    1993-01-01

    Since 1983, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have been engaged in developing standards related to medical imaging. This alliance of users and manufacturers was formed to meet the needs of the medical imaging community as its use of digital imaging technology increased. The development of electronic picture archiving and communications systems (PACS), which could connect a number of medical imaging devices together in a network, led to the need for a standard interface and data structure for use on imaging equipment. Since medical image files tend to be very large and include much text information along with the image, the need for a fast, flexible, and extensible standard was quickly established. The ACR-NEMA Digital Imaging and Communications Standards Committee developed a standard which met these needs. The standard (ACR-NEMA 300-1988) was first published in 1985 and revised in 1988. It is increasingly available from equipment manufacturers. The current work of the ACR-NEMA Committee has been to extend the standard to incorporate direct network connection features, and build on standards work done by the International Standards Organization in its Open Systems Interconnection series. This new standard, called Digital Imaging and Communication in Medicine (DICOM), follows an object-oriented design methodology and makes use of as many existing internationally accepted standards as possible. This paper gives a brief overview of the requirements for communications standards in medical imaging, a history of the ACR-NEMA effort and what it has produced, and a description of the DICOM standard.

  13. Neural system applied on an invariant industrial character recognition

    NASA Astrophysics Data System (ADS)

    Lecoeuche, Stephane; Deguillemont, Denis; Dubus, Jean-Paul

    1997-04-01

    Besides the variety of fonts, character recognition systems for the industrial world are confronted with specific problems such as the variety of supports (metal, wood, paper, ceramics, ...), the variety of marking methods (printing, engraving, ...) and the conditions of lighting. We present a system that is able to solve a part of this problem. It implements a collaboration between two neural networks. The first network, specialized in vision, allows the system to extract the character from an image. In addition, we have equipped our system with characteristics allowing it to obtain an invariant model of the presented character. Thus, whatever the position, size and orientation of the character during capture, the model presented to the input of the second network will be identical. The second network, thanks to a learning phase, yields a character recognition system independent of the type of font used. Furthermore, its generalization capabilities permit us to recognize degraded and/or distorted characters. A feedback loop between the two networks permits the first one to modify the quality of vision. The cooperation between these two networks allows us to recognize characters whatever the support and the marking.

  14. Holographic neural networks versus conventional neural networks: a comparative evaluation for the classification of landmine targets in ground-penetrating radar images

    NASA Astrophysics Data System (ADS)

    Mudigonda, Naga R.; Kacelenga, Ray; Edwards, Mark

    2004-09-01

    This paper evaluates the performance of a holographic neural network in comparison with a conventional feedforward backpropagation neural network for the classification of landmine targets in ground penetrating radar images. The data used in the study were acquired from four different test sites using the landmine detection system developed by General Dynamics Canada Ltd., in collaboration with Defence Research and Development Canada, Suffield. A set of seven features extracted for each detected alarm is used as stimulus inputs for the networks. The recall responses of the networks are then evaluated against the ground truth to declare true or false detections. The area computed under the receiver operating characteristic curve is used for comparative purposes. With a large dataset comprising data from multiple sites, both the holographic and conventional networks showed comparable trends in recall accuracies, with area values of 0.88 and 0.87, respectively. Using independent validation datasets, the holographic network's generalization performance was observed to be better (mean area = 0.86) than that of the conventional network (mean area = 0.82). Despite the widely publicized theoretical advantages of the holographic technology, use of more than the required number of cortical memory elements resulted in an over-fitting phenomenon of the holographic network.

  15. Altered Whole-Brain and Network-Based Functional Connectivity in Parkinson's Disease.

    PubMed

    de Schipper, Laura J; Hafkemeijer, Anne; van der Grond, Jeroen; Marinus, Johan; Henselmans, Johanna M L; van Hilten, Jacobus J

    2018-01-01

    Background: Functional imaging methods, such as resting-state functional magnetic resonance imaging, reflect changes in neural connectivity and may help to assess the widespread consequences of disease-specific network changes in Parkinson's disease. In this study we used a relatively new graph analysis approach in functional imaging: eigenvector centrality mapping. This model-free method, applied to all voxels in the brain, identifies prominent regions in the brain network hierarchy and detects localized differences between patient populations. In other neurological disorders, eigenvector centrality mapping has been linked to changes in functional connectivity in certain nodes of brain networks. Objectives: Examining changes in functional brain connectivity architecture on a whole brain and network level in patients with Parkinson's disease. Methods: Whole brain resting-state functional architecture was studied with a recently introduced graph analysis approach (eigenvector centrality mapping). Functional connectivity was further investigated in relation to eight known resting-state networks. Cross-sectional analyses included group comparison of functional connectivity measures of Parkinson's disease patients ( n = 107) with control subjects ( n = 58) and correlations with clinical data, including motor and cognitive impairment and a composite measure of predominantly non-dopaminergic symptoms. Results: Eigenvector centrality mapping revealed that frontoparietal regions were more prominent in the whole-brain network function in patients compared to control subjects, while frontal and occipital brain areas were less prominent in patients. Using standard resting-state networks, we found predominantly increased functional connectivity, namely within sensorimotor system and visual networks in patients. Regional group differences in functional connectivity of both techniques between patients and control subjects partly overlapped for highly connected posterior brain regions, in particular in the posterior cingulate cortex and precuneus. Clinico-functional imaging relations were not found. Conclusions: Changes on the level of functional brain connectivity architecture might provide a different perspective of pathological consequences of Parkinson's disease. The involvement of specific, highly connected (hub) brain regions may influence whole brain functional network architecture in Parkinson's disease.
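
    The core computation behind eigenvector centrality mapping can be sketched as follows, assuming a nonnegative connectivity matrix built from resting-state time series; the centrality of each node is its entry in the leading eigenvector, obtained here by power iteration. Preprocessing choices are outside this snippet.

```python
# Illustrative eigenvector centrality on a toy functional connectivity matrix:
# the centrality vector is the leading eigenvector of the nonnegative
# similarity matrix A, computed by power iteration.
import numpy as np

def eigenvector_centrality(A, iters=1000, tol=1e-9):
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        v_new = A @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    return v_new

rng = np.random.default_rng(0)
ts = rng.normal(size=(50, 200))            # 50 regions x 200 time points (toy data)
A = np.abs(np.corrcoef(ts))                # nonnegative connectivity matrix
print(eigenvector_centrality(A)[:5])       # centrality of the first five regions
```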

  16. Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie

    2017-03-01

    Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all cerebrovascular patterns, including arteries and capillaries, some filter-based methods are used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging, due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we address the problem of automatic and robust segmentation of cerebral micro-vessel structures in cerebrovascular images of mouse brain acquired by light-sheet microscopy. To segment micro-vessels in large-scale image data, we propose a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of size 32×32 pixels from each acquired brain vessel image as the training data set fed into the CNN for classification. The network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system were used to train the model. The experimental results demonstrate that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, nonuniform gray-level and long-scale contrast regions.
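
    A minimal sketch of the described architecture (three convolutional layers plus one fully connected layer, classifying the centre pixel of a 32×32 patch) is given below; channel counts and pooling are assumptions for illustration.

```python
# Hypothetical patch-based CNN: three conv layers + one fully connected layer,
# taking a 32x32 patch and outputting the probability that its centre pixel
# belongs to a vessel. Channel counts and pooling are illustrative.
import torch
import torch.nn as nn

class VesselPatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.fc = nn.Linear(64 * 4 * 4, 1)        # single fully connected layer

    def forward(self, patch):                     # patch: (N, 1, 32, 32)
        h = self.features(patch).flatten(1)
        return torch.sigmoid(self.fc(h))          # probability of "vessel" at the centre pixel

probs = VesselPatchCNN()(torch.randn(8, 1, 32, 32))
```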

  17. Smart sensing surveillance system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen

    2010-04-01

    An effective public safety sensor system for heavily-populated applications requires sophisticated and geographically-distributed infrastructures, centralized supervision, and deployment of large-scale security and surveillance networks. Artificial intelligence in sensor systems is a critical design element to raise awareness levels, improve the performance of the system and adapt to a changing scenario and environment. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7, all-weather security operation in crowded environments or restricted areas. Technically, the S4 consists of a number of distributed sensor nodes integrated with specific passive sensors to rapidly collect, process, and disseminate heterogeneous sensor data from near omni-directions. These distributed sensor nodes can work cooperatively to send immediate security information when new objects appear. When new objects are detected, the S4 will smartly select the available node with a pan-tilt-zoom (PTZ) electro-optical/infrared (EO/IR) camera to track the objects and capture associated imagery. The S4 provides applicable advanced on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, and configurable alert triggers, etc. Other imaging processes can be updated to meet specific requirements and operations. In the S4, all the sensor nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/Low Probability of Detect) wireless mesh network using ultra-wideband (UWB) RF technology. This UWB RF technology can provide an ad-hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The Service Oriented Architecture of the S4 enables remote applications to interact with the S4 network and use specific presentation methods. In addition, the S4 is compliant with Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE) standards to efficiently discover, access, use, and control heterogeneous sensors and their metadata. These S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. The S4 system is directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  18. Distributed health care imaging information systems

    NASA Astrophysics Data System (ADS)

    Thompson, Mary R.; Johnston, William E.; Guojun, Jin; Lee, Jason; Tierney, Brian; Terdiman, Joseph F.

    1997-05-01

    We have developed an ATM network-based system to collect and catalogue cardio-angiogram videos from the source at a Kaiser central facility and make them available for viewing by doctors at primary care Kaiser facilities. This is an example of the general problem of diagnostic data being generated at tertiary facilities, while the images, or other large data objects they produce, need to be used from a variety of other locations such as doctors' offices or local hospitals. We describe the use of a highly distributed computing and storage architecture to provide all aspects of collecting, storing, analyzing, and accessing such large data objects in a metropolitan area ATM network. Our large data-object management system provides a network interface between the object sources, the data management system and the users of the data. As the data is being stored, a cataloguing system automatically creates and stores condensed versions of the data, textual metadata and pointers to the original data. The catalogue system provides a Web-based graphical interface to the data. The user is able to view the low-resolution data with a standard Internet connection and Web browser. If high resolution is required, a high-speed connection and special application programs can be used to view the high-resolution original data.

  19. Imaging structural and functional brain networks in temporal lobe epilepsy

    PubMed Central

    Bernhardt, Boris C.; Hong, SeokJun; Bernasconi, Andrea; Bernasconi, Neda

    2013-01-01

    Early imaging studies in temporal lobe epilepsy (TLE) focused on the search for mesial temporal sclerosis, as its surgical removal results in clinically meaningful improvement in about 70% of patients. Nevertheless, a considerable subgroup of patients continues to suffer from post-operative seizures. Although the reasons for surgical failure are not fully understood, electrophysiological and imaging data suggest that anomalies extending beyond the temporal lobe may have negative impact on outcome. This hypothesis has revived the concept of human epilepsy as a disorder of distributed brain networks. Recent methodological advances in non-invasive neuroimaging have led to quantify structural and functional networks in vivo. While structural networks can be inferred from diffusion MRI tractography and inter-regional covariance patterns of structural measures such as cortical thickness, functional connectivity is generally computed based on statistical dependencies of neurophysiological time-series, measured through functional MRI or electroencephalographic techniques. This review considers the application of advanced analytical methods in structural and functional connectivity analyses in TLE. We will specifically highlight findings from graph-theoretical analysis that allow assessing the topological organization of brain networks. These studies have provided compelling evidence that TLE is a system disorder with profound alterations in local and distributed networks. In addition, there is emerging evidence for the utility of network properties as clinical diagnostic markers. Nowadays, a network perspective is considered to be essential to the understanding of the development, progression, and management of epilepsy. PMID:24098281

  20. Mars Rover imaging systems and directional filtering

    NASA Technical Reports Server (NTRS)

    Wang, Paul P.

    1989-01-01

    Computer literature searches were carried out at Duke University and NASA Langley Research Center. The purpose was to enhance personal knowledge of the technical problems of pattern recognition and image understanding which must be solved for the Mars Rover and Sample Return Mission. An intensive study of a large collection of relevant literature resulted in a compilation of all important documents in one place. Furthermore, the documents are being classified into: Mars Rover; computer vision (theory); imaging systems; pattern recognition methodologies; and other smart techniques (AI, neural networks, fuzzy logic, etc.).

  1. Delineation of a Re-establishing Drainage Network Using SPOT and Landsat Images

    NASA Astrophysics Data System (ADS)

    Bailey, J. E.; Self, S.; Mouginis-Mark, P. J.

    2008-12-01

    The 1991 eruption of Mt. Pinatubo, The Philippines, provided a unique opportunity to study the effects of a large eruption on the landscape, in part because it took place after the advent of regular satellite-based observations. The eruption formed one large (>100 km2) ignimbrite sheet, with over 70% of the total deposit emplaced in three primary drainage basins to the west of the volcano. High-resolution (20 m/pixel) satellite images showing the western drainage basins and surrounding region both before and after the eruption were used to observe the re-establishment and evolution of drainage networks on the newly emplaced ignimbrite sheet. Changes in the drainage networks were delineated from a time series of SPOT (Satellite Pour l'Observation de la Terre) and Landsat multi-spectral satellite images, the analysis of which was supplemented by ground-based observations. The satellite images showed that the blueprints for the new drainage systems were established early (within days of the eruption) and at a large scale followed the pre-eruption pattern. However, the images also illustrated the ephemeral nature of many channels due to the influence of secondary pyroclastic flows, lahar-dammed lake breakouts, stream piracy and shifts due to erosion. Characteristics of the defined drainage networks were used to infer the relative influence on the lahar hazard within each drainage basin.

  2. High-performance web viewer for cardiac images

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of conventional film in radiology has decreased. Thus, the management and handling of medical images in digital format has become an important and critical task. In Cardiology, for example, the main difficulty is displaying dynamic images with the appropriate color palette and the frame rate used in the acquisition process by Cath, Angio and Echo systems. Another difficulty is handling large images in memory on any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server and network. This application provides near-diagnostic quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources that allows the viewing of still and dynamic images. It can access image files from local disks or a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, exporting DICOM images to other image formats, and many others, all within a pleasant, user-friendly interface inside a Web browser by means of a Java application. This approach offers some advantages over most medical image viewers, such as ease of installation, integration with other systems by means of public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.

  3. [Online-conference using JGN.].

    PubMed

    Nakayama, Kazuya; Kojima, Kazuhiko; Suzuki, Masayuki; Kikuchi, Yuzo; Iwahara, Masayoshi; Matsui, Osamu; Noguchi, Masato

    2004-01-01

    Telemedicine and online conference systems offer several benefits, such as equalizing the level of medical care, improving the efficiency of medical care, and improving service for patients. They make it possible to give advice to, and support, medical staff stationed in other facilities and to provide the same quality of treatment for patients. In this paper, we set up an experimental teleconferencing network using the JGN (Japan Gigabit Network) and held case-study discussions between Kanazawa University and Fukui Red Cross Hospital, 70 km apart. The JGN used in this study is an ultra-high-speed network for the purpose of research and development. Kanazawa University and Fukui Red Cross Hospital are connected by a 10 Mbps communication link of the JGN. We held online conferences on the experimental network using a video chat system. As a result, using the video chat system, the average transmission rate of MRI images (256 × 256 pixels, 16 bit) was 0.2 s/frame.

  4. 3D printing application and numerical simulations in a fracture system

    NASA Astrophysics Data System (ADS)

    Yoon, H.; Martinez, M. J.

    2017-12-01

    The hydrogeological and mechanical properties in fractured and porous media are fundamental to predicting coupled multiphysics processes in the subsurface. Recent advances in experimental methods and multi-scale imaging capabilities have revolutionized our ability to quantitatively characterize geomaterials, and digital counterparts are now routinely used for numerical simulations to characterize petrophysical and mechanical properties across scales. 3D printing is a very effective and creative technique that reproduces the digital images in a controlled way. For geoscience applications, 3D printing can be co-opted to print reproducible porous and fractured structures derived from CT imaging of actual rocks and from theoretical algorithms for experimental testing. In this work we used a stereolithography (SLA) method to create a single fracture network. The fracture in shale was first scanned using a microCT system, and then the digital fracture network was printed in two parts and assembled. Apertures range from 0.3 to 1 mm. In particular, we discuss the design of the single fracture network and the progress of printing practices to reproduce the fracture network system. Printed samples at different scales are used to measure the permeability and surface roughness. Various numerical simulations, including (non-)reactive transport and multiphase flow cases, are performed to study fluid flow characterization. We will also discuss the innovative advancement of 3D printing techniques applicable to coupled processes in the subsurface. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  5. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to the global environment sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top of atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed pixel-based neural network, pixel-based CNN and patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
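
    The patch-based sampling step can be illustrated as follows: for each labelled pixel, a small neighbourhood is cut from the multidimensional top-of-atmosphere reflectance cube and used as one training sample. Patch size and band count are assumptions, not the paper's settings.

```python
# Illustrative patch extraction around labelled pixels of a multi-band
# reflectance image; each patch becomes one CNN training sample.
import numpy as np

def extract_patches(reflectance, pixel_coords, half=2):
    """reflectance: (H, W, bands); returns (N, 2*half+1, 2*half+1, bands)."""
    padded = np.pad(reflectance, ((half, half), (half, half), (0, 0)), mode="reflect")
    patches = [padded[r:r + 2 * half + 1, c:c + 2 * half + 1, :]
               for r, c in pixel_coords]
    return np.stack(patches)

cube = np.random.rand(100, 100, 7)               # toy 7-band reflectance image
samples = extract_patches(cube, [(10, 20), (50, 60)])
print(samples.shape)                             # (2, 5, 5, 7)
```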

  6. Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI.

    PubMed

    Yang, Xin; Liu, Chaoyue; Wang, Zhiwei; Yang, Jun; Min, Hung Le; Wang, Liang; Cheng, Kwang-Ting Tim

    2017-12-01

    Multi-parameter magnetic resonance imaging (mp-MRI) is increasingly popular for prostate cancer (PCa) detection and diagnosis. However, interpreting mp-MRI data which typically contains multiple unregistered 3D sequences, e.g. apparent diffusion coefficient (ADC) and T2-weighted (T2w) images, is time-consuming and demands special expertise, limiting its usage for large-scale PCa screening. Therefore, solutions to computer-aided detection of PCa in mp-MRI images are highly desirable. Most recent advances in automated methods for PCa detection employ a handcrafted feature based two-stage classification flow, i.e. voxel-level classification followed by a region-level classification. This work presents an automated PCa detection system which can concurrently identify the presence of PCa in an image and localize lesions based on deep convolutional neural network (CNN) features and a single-stage SVM classifier. Specifically, the developed co-trained CNNs consist of two parallel convolutional networks for ADC and T2w images respectively. Each network is trained using images of a single modality in a weakly-supervised manner by providing a set of prostate images with image-level labels indicating only the presence of PCa without priors of lesions' locations. Discriminative visual patterns of lesions can be learned effectively from clutters of prostate and surrounding tissues. A cancer response map with each pixel indicating the likelihood to be cancerous is explicitly generated at the last convolutional layer of the network for each modality. A new back-propagated error E is defined to enforce both optimized classification results and consistent cancer response maps for different modalities, which help capture highly representative PCa-relevant features during the CNN feature learning process. The CNN features of each modality are concatenated and fed into a SVM classifier. For images which are classified to contain cancers, non-maximum suppression and adaptive thresholding are applied to the corresponding cancer response maps for PCa foci localization. Evaluation based on 160 patient data with 12-core systematic TRUS-guided prostate biopsy as the reference standard demonstrates that our system achieves a sensitivity of 0.46, 0.92 and 0.97 at 0.1, 1 and 10 false positives per normal/benign patient which is significantly superior to two state-of-the-art CNN-based methods (Oquab et al., 2015; Zhou et al., 2015) and 6-core systematic prostate biopsies. Copyright © 2017 Elsevier B.V. All rights reserved.
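
    The final single-stage classification can be sketched as below, with stand-in feature extractors in place of the co-trained CNN branches: ADC and T2w features are concatenated and fed to an SVM.

```python
# Hedged sketch of the last stage: concatenate modality-specific features and
# train a single-stage SVM. The "branches" here are placeholders, not the
# paper's co-trained CNNs.
import numpy as np
from sklearn.svm import SVC

def extract_features(branch, images):
    # placeholder for "run CNN branch, take last-layer features"
    return np.array([branch(img) for img in images])

adc_branch = lambda img: img.reshape(-1)[:64]          # assumption: 64-D features per modality
t2w_branch = lambda img: img.reshape(-1)[:64]

rng = np.random.default_rng(0)
adc_imgs = rng.normal(size=(40, 16, 16))
t2w_imgs = rng.normal(size=(40, 16, 16))
labels = rng.integers(0, 2, size=40)                   # image-level PCa / no-PCa labels

X = np.concatenate([extract_features(adc_branch, adc_imgs),
                    extract_features(t2w_branch, t2w_imgs)], axis=1)
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```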

  7. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  8. A semi-automatic method for extracting thin line structures in images as rooted tree network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts and consisting in minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, the geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
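
    The directional term described above can be illustrated by computing the smoothed gradient structure tensor and its per-pixel eigen-decomposition; the eigenvector of the smaller eigenvalue gives the local line orientation that an anisotropic metric can favour. Smoothing scales below are illustrative.

```python
# Illustrative gradient structure tensor with per-pixel eigen-decomposition;
# the orientation field and coherence could feed an anisotropic metric.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_orientation(img, sigma=2.0):
    gy, gx = np.gradient(img.astype(float))
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # eigen-decomposition of [[Jxx, Jxy], [Jxy, Jyy]] at every pixel
    tensors = np.stack([np.stack([Jxx, Jxy], -1), np.stack([Jxy, Jyy], -1)], -2)
    evals, evecs = np.linalg.eigh(tensors)         # eigenvalues in ascending order
    coherence = (evals[..., 1] - evals[..., 0]) / (evals[..., 1] + evals[..., 0] + 1e-9)
    orientation = evecs[..., 0]                    # eigenvector of the smallest eigenvalue
    return orientation, coherence

img = np.random.rand(64, 64)
orient, coh = structure_tensor_orientation(img)
print(orient.shape, coh.shape)                     # (64, 64, 2) (64, 64)
```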

  9. The ART of representation: Memory reduction and noise tolerance in a neural network vision system

    NASA Astrophysics Data System (ADS)

    Langley, Christopher S.

    The Feature Cerebellar Model Arithmetic Computer (FCMAC) is a multiple-input-single-output neural network that can provide three-degree-of-freedom (3-DOF) pose estimation for a robotic vision system. The FCMAC provides sufficient accuracy to enable a manipulator to grasp an object from an arbitrary pose within its workspace. The network learns an appearance-based representation of an object by storing coarsely quantized feature patterns. As all unique patterns are encoded, the network size grows uncontrollably. A new architecture is introduced herein, which combines the FCMAC with an Adaptive Resonance Theory (ART) network. The ART module categorizes patterns observed during training into a set of prototypes that are used to build the FCMAC. As a result, the network no longer grows without bound, but constrains itself to a user-specified size. Pose estimates remain accurate since the ART layer tends to discard the least relevant information first. The smaller network performs recall faster, and in some cases is better for generalization, resulting in a reduction of error at recall time. The ART-Under-Constraint (ART-C) algorithm is extended to include initial filling with randomly selected patterns (referred to as ART-F). In experiments using a real-world data set, the new network performed equally well using less than one tenth the number of coarse patterns as a regular FCMAC. The FCMAC is also extended to include real-valued input activations. As a result, the network can be tuned to reject a variety of types of noise in the image feature detection. A quantitative analysis of noise tolerance was performed using four synthetic noise algorithms, and a qualitative investigation was made using noisy real-world image data. In validation experiments, the FCMAC system outperformed Radial Basis Function (RBF) networks for the 3-DOF problem, and had accuracy comparable to that of Principal Component Analysis (PCA) and superior to that of Shape Context Matching (SCM), both of which estimate orientation only.

  10. Visual identification system for homeland security and law enforcement support

    NASA Astrophysics Data System (ADS)

    Samuel, Todd J.; Edwards, Don; Knopf, Michael

    2005-05-01

    This paper describes the basic configuration for a visual identification system (VIS) for Homeland Security and law enforcement support. Security and law enforcement systems with an integrated VIS will accurately and rapidly provide identification of vehicles or containers that have entered, exited or passed through a specific monitoring location. The VIS system stores all images and makes them available for recall for approximately one week. Images of alarming vehicles will be archived indefinitely as part of the alarming vehicle's or cargo container's record. Depending on user needs, the digital imaging information will be provided electronically to the individual inspectors, supervisors, and/or control center at the customer's office. The key components of the VIS are the high-resolution cameras that capture images of vehicles, lights, presence sensors, image cataloging software, and image recognition software. In addition to the cameras, the physical integration and network communications of the VIS components with the balance of the security system and client must be ensured.

  11. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for the welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, as confirmed by good detection of the molten pool and sound welds.

  12. The algorithm study for using the back propagation neural network in CT image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi

    2017-01-01

    A back-propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because a BP network can learn and store the mapping between a large number of inputs and outputs without complex mathematical equations to describe the mapping relationship, it is the most widely used type. BP can iteratively compute the weight coefficients and thresholds of the network based on the training samples and the back propagation of errors, which minimizes the error sum of squares of the network. Since the boundary of computed tomography (CT) heart images is usually discontinuous, and there are large changes in the volume and boundary of heart images, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Meanwhile, there are large differences between diastolic and systolic images, and conventional methods cannot accurately classify the two cases. In this paper, we introduce BP to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain the samples, and the BP network was trained on these samples. To acquire an appropriate BP network for the segmentation of heart images, we normalized the heart images and extracted the gray-level information of the heart. Then the boundary of the images was input into the network to compare the differences between the theoretical output and the actual output, and the errors were fed back into the BP network to modify the weight coefficients of the layers. Through a large amount of training, the BP network tends to become stable and the weight coefficients of the layers can be determined, capturing the relationship between the CT images and the heart boundary.
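
    The training loop described above can be sketched with a toy two-layer back-propagation network on placeholder grey-level features and boundary labels; sizes, learning rate and data are illustrative only.

```python
# Toy back-propagation training loop: forward pass, error, backward pass,
# weight updates. Data and labels are placeholders, not CT images.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 9))                                  # e.g. normalised 3x3 grey-level neighbourhoods
y = (X.mean(axis=1) > 0.5).astype(float)[:, None]         # stand-in boundary labels

W1, b1 = rng.normal(0, 0.5, (9, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                              # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                                         # error propagates backward
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ d_out / len(X); b2 -= 0.1 * d_out.mean(axis=0)
    W1 -= 0.1 * X.T @ d_h / len(X);   b1 -= 0.1 * d_h.mean(axis=0)
print("final MSE:", float(np.mean(err ** 2)))
```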

  13. Computer-Aided Diagnosis Based on Convolutional Neural Network System for Colorectal Polyp Classification: Preliminary Experience.

    PubMed

    Komeda, Yoriaki; Handa, Hisashi; Watanabe, Tomohiro; Nomura, Takanobu; Kitahashi, Misaki; Sakurai, Toshiharu; Okamoto, Ayana; Minami, Tomohiro; Kono, Masashi; Arizumi, Tadaaki; Takenaka, Mamoru; Hagiwara, Satoru; Matsui, Shigenaga; Nishida, Naoshi; Kashida, Hiroshi; Kudo, Masatoshi

    2017-01-01

    Computer-aided diagnosis (CAD) is becoming a next-generation tool for the diagnosis of human disease. CAD for colon polyps has been suggested as a particularly useful tool for trainee colonoscopists, as the use of a CAD system avoids the complications associated with endoscopic resections. In addition to conventional CAD, a convolutional neural network (CNN) system utilizing artificial intelligence (AI) has been developing rapidly over the past 5 years. We attempted to generate a unique CNN-CAD system with an AI function that studied endoscopic images extracted from movies obtained with colonoscopes used in routine examinations. Here, we report our preliminary results of this novel CNN-CAD system for the diagnosis of colon polyps. A total of 1,200 images from cases of colonoscopy performed between January 2010 and December 2016 at Kindai University Hospital were used. These images were extracted from the video of actual endoscopic examinations. Additional video images from 10 cases of unlearned processes were retrospectively assessed in a pilot study. They were simply diagnosed as either an adenomatous or nonadenomatous polyp. The number of images used by AI to learn to distinguish adenomatous from nonadenomatous was 1,200:600. These images were extracted from the videos of actual endoscopic examinations. The size of each image was adjusted to 256 × 256 pixels. A 10-fold cross-validation was carried out. The accuracy of the 10-fold cross-validation is 0.751, where the accuracy is the ratio of the number of correct answers over the number of all the answers produced by the CNN. The decisions by the CNN were correct in 7 of 10 cases. A CNN-CAD system using routine colonoscopy might be useful for the rapid diagnosis of colorectal polyp classification. Further prospective studies in an in vivo setting are required to confirm the effectiveness of a CNN-CAD system in routine colonoscopy. © 2017 S. Karger AG, Basel.
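
    The 10-fold cross-validated accuracy (correct answers divided by all answers) can be sketched as follows, with a stand-in classifier in place of the CNN and placeholder features.

```python
# Illustrative 10-fold cross-validation accuracy; the logistic regression and
# random features stand in for the CNN and real endoscopic images.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 32))            # placeholder image features
y = rng.integers(0, 2, size=1200)          # adenomatous vs. nonadenomatous labels

accs = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(np.mean(clf.predict(X[test_idx]) == y[test_idx]))
print("10-fold accuracy:", float(np.mean(accs)))
```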

  14. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    PubMed

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Spaceborne synthetic aperture radar signal processing using FPGAs

    NASA Astrophysics Data System (ADS)

    Sugimoto, Yohei; Ozawa, Satoru; Inaba, Noriyasu

    2017-10-01

    Synthetic aperture radar (SAR) imagery requires image reproduction through successive signal processing of the received data before images can be browsed and information extracted. The received signal data records of the ALOS-2/PALSAR-2 are stored in the onboard mission data storage and transmitted to the ground. In order to balance the storage usage against the capacity of the mission data communication networks, the operational duty of the PALSAR-2 is limited. This balance relies strongly on network availability. The observation operations of present spaceborne SAR systems are rigorously planned by simulating the mission data balance, given conflicting user demands. This problem should be solved so that we do not have to compromise the operations and the potential of next-generation spaceborne SAR systems. One of the solutions is to compress the SAR data through onboard image reproduction and information extraction from the reproduced images. This is also beneficial for fast delivery of information products and for event-driven observations by constellations. The Emergence Studio (Sōhatsu kōbō in Japanese), together with the Japan Aerospace Exploration Agency, is developing evaluation models of an FPGA-based signal processing system for onboard SAR image reproduction. The model developed in 2016, named the "Fast L1 Processor" (FLIP), can reproduce a 10 m-resolution single-look complex image (Level 1.1) from ALOS/PALSAR raw signal data (Level 1.0). Running at 200 MHz, the FLIP processes data twice as fast as CPU-based computing at 3.7 GHz, and the image processed by the FLIP is in no way inferior to the image processed with 32-bit computing in MATLAB.
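
    For orientation, the kind of matched-filter range compression involved in turning raw (Level 1.0) echoes into a focused product can be sketched in a few lines of floating-point numpy. This is only an illustration of one signal-processing step, not the FLIP FPGA implementation, and the chirp parameters below are invented for the example.

        # Illustrative range compression (matched filtering against the chirp
        # replica), a floating-point reference in numpy; NOT the FLIP pipeline.
        import numpy as np

        def range_compress(raw_lines, chirp_replica):
            """raw_lines: (n_azimuth, n_range) complex raw echoes; chirp_replica: 1-D complex chirp."""
            n = raw_lines.shape[1]
            H = np.conj(np.fft.fft(chirp_replica, n))                 # matched filter in frequency domain
            return np.fft.ifft(np.fft.fft(raw_lines, n, axis=1) * H, axis=1)

        # Example chirp replica (linear FM pulse); the numbers are purely illustrative.
        fs, T, K = 32e6, 20e-6, 5e11                                  # sample rate, pulse length, chirp rate
        t = np.arange(int(fs * T)) / fs
        chirp = np.exp(1j * np.pi * K * (t - T / 2) ** 2)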

  16. Kinetic signature of fractal-like filament networks formed by orientational linear epitaxy.

    PubMed

    Hwang, Wonmuk; Eryilmaz, Esma

    2014-07-11

    We study a broad class of epitaxial assembly of filament networks on lattice surfaces. Over time, a scale-free behavior emerges with a 2.5-3 power-law exponent in filament length distribution. Partitioning between the power-law and exponential behaviors in a network can be used to find the stage and kinetic parameters of the assembly process. To analyze real-world networks, we develop a computer program that measures the network architecture in experimental images. Application to triaxial networks of collagen fibrils shows quantitative agreement with our model. Our unifying approach can be used for characterizing and controlling the network formation that is observed across biological and nonbiological systems.
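
    Estimating the reported power-law exponent from measured filament lengths can be done with a standard continuous maximum-likelihood estimator. The sketch below is a generic estimator under an assumed lower cutoff l_min, not the authors' analysis program, but it could be used to check whether a measured length distribution shows an exponent in the reported 2.5-3 range.

        # Generic continuous power-law maximum-likelihood fit (Hill-type estimator)
        # for p(l) ~ l**(-alpha), l >= l_min.
        import numpy as np

        def powerlaw_exponent(lengths, l_min):
            """MLE of alpha for the tail of the filament-length distribution."""
            tail = np.asarray(lengths, dtype=float)
            tail = tail[tail >= l_min]
            return 1.0 + len(tail) / np.sum(np.log(tail / l_min))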

  17. Open architecture of smart sensor suites

    NASA Astrophysics Data System (ADS)

    Müller, Wilmuth; Kuwertz, Achim; Grönwall, Christina; Petersson, Henrik; Dekker, Rob; Reinert, Frank; Ditzel, Maarten

    2017-10-01

    Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network - a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness. Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc. The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.

  18. Imaging of the interaction of cancer cells and the lymphatic system.

    PubMed

    Tran Cao, Hop S; McElroy, Michele; Kaushal, Sharmeela; Hoffman, Robert M; Bouvet, Michael

    2011-09-10

    A thorough understanding of the lymphatic system and its interaction with cancer cells is crucial to our ability to fight cancer metastasis. Efforts to study the lymphatic system had previously been limited by the inability to visualize the lymphatic system in vivo in real time. Fluorescence imaging can address these limitations and allow for visualization of lymphatic delivery and trafficking of cancer cells and potentially therapeutic agents as well. Here, we review recent articles in which antibody-fluorophore conjugates are used to label the lymphatic network and fluorescent proteins to label cancer cells in the evaluation of lymphatic delivery and imaging. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification.

    PubMed

    Sladojevic, Srdjan; Arsenovic, Marko; Anderla, Andras; Culibrk, Dubravko; Stefanovic, Darko

    2016-01-01

    The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification and the use of deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant disease and to distinguish diseased from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this is the first time this method has been proposed for plant disease recognition. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering the images used to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% for separate class tests, with an average of 96.3%.

  20. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

    PubMed Central

    Sladojevic, Srdjan; Arsenovic, Marko; Culibrk, Dubravko; Stefanovic, Darko

    2016-01-01

    The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification and the use of deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant disease and to distinguish diseased from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this is the first time this method has been proposed for plant disease recognition. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering the images used to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% for separate class tests, with an average of 96.3%. PMID:27418923

  1. Research on improving image recognition robustness by combining multiple features with associative memory

    NASA Astrophysics Data System (ADS)

    Guo, Dongwei; Wang, Zhe

    2018-05-01

    Convolutional neural networks (CNNs) have achieved great success in computer vision: they can learn hierarchical representations from raw pixels and show outstanding performance in various image recognition tasks [1]. However, CNNs are easily fooled, in that it is possible to produce images totally unrecognizable to human eyes that CNNs nevertheless believe, with near certainty, to be familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by a CNN, t-SNE and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted by the CNN and the features extracted by t-SNE are associated through the EBAM. Recognition robustness is thus ensured by a comprehensive assessment of the two kinds of features. With our model, the error rate on fraudulent data is only 8%. In systems that require a high safety factor, and in other critical areas, strong robustness is extremely important: if image recognition robustness can be ensured, network security will be greatly improved and social production efficiency greatly enhanced.
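
    As a minimal illustration of the associative step, a classical Hebbian bidirectional associative memory can store pairs of bipolar feature codes and recall one from the other. The exponential BAM (EBAM) used in the paper, and the CNN and t-SNE feature extractors, are not reproduced here; the vectors are assumed to be binarized to +/-1 beforehand.

        # Classical (non-exponential) BAM in numpy, shown only to illustrate how
        # two feature codes can be stored and recalled as an associated pair.
        import numpy as np

        def _sgn(v):
            # map to bipolar +/-1 (zeros treated as +1)
            return np.where(v >= 0, 1.0, -1.0)

        def bam_train(X, Y):
            """X: (n_pairs, dx), Y: (n_pairs, dy), entries in {-1, +1}. Hebbian outer-product weights."""
            return X.T @ Y

        def bam_recall(W, x, n_iter=10):
            """Iteratively recall the (x, y) pair associated with a possibly noisy x."""
            x = _sgn(np.asarray(x, dtype=float))
            y = _sgn(x @ W)
            for _ in range(n_iter):
                x_new = _sgn(y @ W.T)
                y_new = _sgn(x_new @ W)
                if np.array_equal(x_new, x) and np.array_equal(y_new, y):
                    break
                x, y = x_new, y_new
            return x, y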

  2. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
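
    A minimal sketch of the compression idea, assuming PyWavelets is available: each pixel's spectral vector is wavelet-transformed and only the largest coefficients are kept before the vector is exchanged between workers. The fixed keep-fraction below stands in for the adaptive, run-time ratio selection described in the paper.

        # Lossy wavelet compression of one hyperspectral pixel vector prior to exchange.
        import numpy as np
        import pywt

        def compress_pixel(spectrum, wavelet="db4", keep=0.25):
            """Keep only the largest `keep` fraction of wavelet coefficients (by magnitude)."""
            coeffs = pywt.wavedec(spectrum, wavelet)
            flat, slices = pywt.coeffs_to_array(coeffs)
            k = max(1, int(keep * flat.size))
            thresh = np.sort(np.abs(flat))[-k]
            flat[np.abs(flat) < thresh] = 0.0            # zeroed coefficients need not be transmitted
            return flat, slices, wavelet

        def decompress_pixel(flat, slices, wavelet):
            coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
            return pywt.waverec(coeffs, wavelet)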

  3. Smart unattended sensor networks with scene understanding capabilities

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2006-05-01

    Unattended sensor systems are new technologies that are intended to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen at many nodes of a network simultaneously. But the number of control personnel is always limited, and the attention of human operators may simply be drawn to particular network nodes while a more dangerous threat goes unnoticed at other nodes. Sensor networks would be more effective if equipped with a system that is similar to human vision in its ability to understand visual information. For this, human vision uses a rough but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal vision that analyzes and recognizes objects in the center of the selected region of interest, and a visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically-inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems is achieved in the network-symbolic system via interaction between the Visual and Object Buffers and the top-level knowledge system.

  4. Motor system hyperconnectivity in juvenile myoclonic epilepsy: a cognitive functional magnetic resonance imaging study.

    PubMed

    Vollmar, Christian; O'Muircheartaigh, Jonathan; Barker, Gareth J; Symms, Mark R; Thompson, Pamela; Kumari, Veena; Duncan, John S; Janz, Dieter; Richardson, Mark P; Koepp, Matthias J

    2011-06-01

    Juvenile myoclonic epilepsy is the most frequent idiopathic generalized epilepsy syndrome. It is characterized by predominant myoclonic jerks of the upper limbs, often provoked by cognitive activities, and typically responsive to treatment with sodium valproate. Neurophysiological, neuropsychological and imaging studies in juvenile myoclonic epilepsy have consistently pointed towards subtle abnormalities in the medial frontal lobes. Using functional magnetic resonance imaging with an executive frontal lobe paradigm, we investigated cortical activation patterns and interaction between cortical regions in 30 patients with juvenile myoclonic epilepsy and 26 healthy controls. With increasing cognitive demand, patients showed increasing coactivation of the primary motor cortex and supplementary motor area. This effect was stronger in patients still suffering from seizures, and was not seen in healthy controls. Patients with juvenile myoclonic epilepsy showed increased functional connectivity between the motor system and frontoparietal cognitive networks. Furthermore, we found impaired deactivation of the default mode network during cognitive tasks with persistent activation in medial frontal and central regions in patients. Coactivation in the motor cortex and supplementary motor area with increasing cognitive load and increased functional coupling between the motor system and cognitive networks provide an explanation of how cognitive effort can cause myoclonic jerks in juvenile myoclonic epilepsy. The supplementary motor area represents the anatomical link between these two functional systems, and our findings may be the functional correlate of previously described structural abnormalities in the medial frontal lobe in juvenile myoclonic epilepsy.

  5. The impact of poverty on the development of brain networks

    PubMed Central

    Lipina, Sebastián J.; Posner, Michael I.

    2012-01-01

    Although the study of brain development in non-human animals is an old one, recent imaging methods have allowed non-invasive studies of the gray and white matter of the human brain over the lifespan. Classic animal studies show clearly that impoverished environments reduce cortical gray matter in relation to complex environments and cognitive and imaging studies in humans suggest which networks may be most influenced by poverty. Studies have been clear in showing the plasticity of many brain systems, but whether sensitivity to learning differs over the lifespan and for which networks is still unclear. A major task for current research is a successful integration of these methods to understand how development and learning shape the neural networks underlying achievements in literacy, numeracy, and attention. This paper seeks to foster further integration by reviewing the current state of knowledge relating brain changes to behavior and indicating possible future directions. PMID:22912613

  6. Automated Deep Learning-Based System to Identify Endothelial Cells Derived from Induced Pluripotent Stem Cells.

    PubMed

    Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi

    2018-06-05

    Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  7. Bilinear Convolutional Neural Networks for Fine-grained Visual Recognition.

    PubMed

    Lin, Tsung-Yu; RoyChowdhury, Aruni; Maji, Subhransu

    2017-07-04

    We present a simple and effective architecture for fine-grained recognition called Bilinear Convolutional Neural Networks (B-CNNs). These networks represent an image as a pooled outer product of features derived from two CNNs and capture localized feature interactions in a translationally invariant manner. B-CNNs are related to orderless texture representations built on deep features but can be trained in an end-to-end manner. Our most accurate model obtains 84.1%, 79.4%, 84.5% and 91.3% per-image accuracy on the Caltech-UCSD birds [66], NABirds [63], FGVC aircraft [42], and Stanford cars [33] dataset respectively and runs at 30 frames-per-second on a NVIDIA Titan X GPU. We then present a systematic analysis of these networks and show that (1) the bilinear features are highly redundant and can be reduced by an order of magnitude in size without significant loss in accuracy, (2) are also effective for other image classification tasks such as texture and scene recognition, and (3) can be trained from scratch on the ImageNet dataset offering consistent improvements over the baseline architecture. Finally, we present visualizations of these models on various datasets using top activations of neural units and gradient-based inversion techniques. The source code for the complete system is available at http://vis-www.cs.umass.edu/bcnn.
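
    The core bilinear pooling step described above (pooled outer product of two feature maps followed by signed square-root and L2 normalization) can be written compactly. The sketch below covers only this pooling operation, not the end-to-end B-CNN training pipeline available at the authors' URL.

        # Bilinear pooling of two CNN feature maps into a single descriptor.
        import numpy as np

        def bilinear_pool(feat_a, feat_b, eps=1e-12):
            """feat_a: (Ca, H, W), feat_b: (Cb, H, W) feature maps from two CNN streams."""
            ca, h, w = feat_a.shape
            cb = feat_b.shape[0]
            A = feat_a.reshape(ca, h * w)
            B = feat_b.reshape(cb, h * w)
            phi = (A @ B.T) / (h * w)                  # pooled outer product, shape (Ca, Cb)
            phi = phi.reshape(-1)
            phi = np.sign(phi) * np.sqrt(np.abs(phi))  # signed square-root normalization
            return phi / (np.linalg.norm(phi) + eps)   # L2 normalization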

  8. Cingulate, Frontal and Parietal Cortical Dysfunction in Attention-Deficit/Hyperactivity Disorder

    PubMed Central

    Bush, George

    2011-01-01

    Functional and structural neuroimaging have identified abnormalities of the brain that are likely to contribute to the neuropathophysiology of attention-deficit/hyperactivity disorder (ADHD). In particular, hypofunction of the brain regions comprising the cingulo-frontal-parietal (CFP) cognitive-attention network have been consistently observed across studies. These are major components of neural systems that are relevant to ADHD, including cognitive/attention networks, motor systems and reward/feedback-based processing systems. Moreover, these areas interact with other brain circuits that have been implicated in ADHD, such as the “default mode” resting state network. ADHD imaging data related to CFP network dysfunction will be selectively highlighted here to help facilitate its integration with the other information presented in this special issue. Together, these reviews will help shed light on the neurobiology of ADHD. PMID:21489409

  9. [Advantages and Application Prospects of Deep Learning in Image Recognition and Bone Age Assessment].

    PubMed

    Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H

    2017-12-01

    Deep learning and neural network models have become new research directions and hot issues in the fields of machine learning and artificial intelligence in recent years. Deep learning has made breakthroughs in image and speech recognition, and has also been extensively used in face recognition and information retrieval because of its particular strengths. Bone X-ray images show variations in black-white-gray gradation, with image features of black-and-white contrast and gray-level differences. Given these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing an automatic forensic system for bone age assessment. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress in image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  10. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    PubMed Central

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614
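
    As a rough illustration of the union-of-covers idea, the sketch below covers each class with hyperspheres centred on its training feature vectors and rejects samples that fall inside no cover. Real BPR uses more elaborate hyper-sausage cover sets, so this is a simplified stand-in rather than the authors' method, and the radius is an assumed tuning parameter.

        # Simplified "union of geometrical cover sets" classifier over CNN features.
        import numpy as np

        def build_covers(features_by_class, radius):
            """features_by_class: {label: (n_i, d) array of training feature vectors}."""
            return {label: (np.asarray(f, dtype=float), radius)
                    for label, f in features_by_class.items()}

        def classify(covers, x):
            x = np.asarray(x, dtype=float)
            for label, (centers, radius) in covers.items():
                if np.min(np.linalg.norm(centers - x, axis=1)) <= radius:
                    return label
            return -1   # covered by no class: rejected as "unknown"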

  11. Exploring Solar System Origins With The Desert Fireball Network

    NASA Astrophysics Data System (ADS)

    Day, B. H.; Bland, P.

    2016-12-01

    Fireball camera networks are designed to recover meteorites with orbits. A geological context is a prerequisite for understanding terrestrial rocks. An improved dynamical context would benefit our understanding of extraterrestrial geology. A dozen projects - professional and amateur - have pursued this goal over the years. The effort has yielded 10 meteorites with orbits. Why so few? All these projects were in the temperate zone of the northern hemisphere: areas where meteorite recovery is marginal. Deserts are one of the few places on Earth where field searches for meteorites can be mounted with a realistic chance of success. This was the driver behind the Desert Fireball Network. The Desert Fireball Network (DFN) uses automated observatories across Australia to triangulate trajectories of meteorites entering the atmosphere, determine pre-entry orbits, and pinpoint their fall positions. Each observatory is an autonomous intelligent imaging system, taking 1,000 36-megapixel all-sky images throughout the night and using neural network algorithms to recognise events. They are capable of operating for 12 months in a harsh environment, and store all imagery collected. We developed a completely automated software pipeline for data reduction, and built a supercomputer database for storage, allowing us to process our entire archive. We successfully recovered a meteorite from Lake Eyre on 31st December 2015, using this pipeline. By February 2016 we had reduced our complete fireball dataset, deriving precise orbits for >350 events: a dataset that provides a unique window on the dynamics of material in the inner solar system. The DFN currently stands at 50 stations distributed across the Australian continent, covering an area of 2.5 million km². The fireball and meteorite orbital data that it can provide will deliver a new dynamical window on the inner solar system, and new insights into solar system origins. Together with DFN's partners at NASA's Solar System Exploration Research Virtual Institute, the team is now working to expand the network beyond Australia to locations around the world.

  12. A New Experiment on Bengali Character Recognition

    NASA Astrophysics Data System (ADS)

    Barman, Sumana; Bhattacharyya, Debnath; Jeon, Seung-Whan; Kim, Tai-Hoon; Kim, Haeng-Kon

    This paper presents a method for using a view-based approach in a Bangla Optical Character Recognition (OCR) system, providing a reduced data set to the ANN classification engine compared with traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with the use of a backpropagation artificial neural network. This is the first published account of using a segmentation-free optical character recognition system for Bangla with a view-based approach. The methodology presented here assumes that the OCR pre-processor has presented the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. The images are first converted into greyscale and then to binary images; these images are then scaled to fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed by extracting the characteristic points, which in this case is simply a series of 0s and 1s of fixed length. Finally, an artificial neural network is chosen for the training and classification process.
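
    The preprocessing chain described above (greyscale conversion, binarization, scaling to a fixed area, flattening to a 0/1 feature vector) might look roughly as follows. The threshold and the 16 × 16 target size are illustrative assumptions, not values taken from the paper.

        # Illustrative character preprocessing: greyscale -> binary -> fixed size -> 0/1 vector.
        import numpy as np
        from PIL import Image

        def character_to_feature_vector(path, size=(16, 16), threshold=128):
            img = Image.open(path).convert("L")    # greyscale
            img = img.resize(size)                 # scale to the pre-determined area
            pixels = np.array(img)
            # series of 0s and 1s of fixed length, fed to the backpropagation classifier
            return (pixels < threshold).astype(np.uint8).ravel()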

  13. Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

    PubMed Central

    Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas

    2009-01-01

    Background Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we propose some examples of dynamical flows used in image contrasting and contouring. Methodology First we present the physiological basis of retinal function by showing the role of lateral inhibition in the generation of optical illusions and pathologic processes. Then, based on these biological considerations about the real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach, or its continuous version, i.e. a non-isotropic diffusion reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows; we compare it with the watershed method and we use it for contouring. Conclusions We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery. PMID:19547712
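
    A discrete centre-surround (difference-of-Gaussians) operator gives the flavour of the lateral-inhibition contrasting discussed above; the sigma values are illustrative, and the paper's neural-network and reaction-diffusion formulations are not reproduced here.

        # Centre-surround contrast enhancement: the wide surround inhibits the narrow centre.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def center_surround_contrast(image, sigma_center=1.0, sigma_surround=4.0, gain=1.0):
            image = image.astype(float)
            center = gaussian_filter(image, sigma_center)
            surround = gaussian_filter(image, sigma_surround)
            return image + gain * (center - surround)   # boost local contrast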

  14. Neural network and its application to CT imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W.

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  15. Large-scale topology and the default mode network in the mouse connectome

    PubMed Central

    Stafford, James M.; Jarrett, Benjamin R.; Miranda-Dominguez, Oscar; Mills, Brian D.; Cain, Nicholas; Mihalas, Stefan; Lahvis, Garet P.; Lattal, K. Matthew; Mitchell, Suzanne H.; David, Stephen V.; Fryer, John D.; Nigg, Joel T.; Fair, Damien A.

    2014-01-01

    Noninvasive functional imaging holds great promise for serving as a translational bridge between human and animal models of various neurological and psychiatric disorders. However, despite a depth of knowledge of the cellular and molecular underpinnings of atypical processes in mouse models, little is known about the large-scale functional architecture measured by functional brain imaging, limiting translation to human conditions. Here, we provide a robust processing pipeline to generate high-resolution, whole-brain resting-state functional connectivity MRI (rs-fcMRI) images in the mouse. Using a mesoscale structural connectome (i.e., an anterograde tracer mapping of axonal projections across the mouse CNS), we show that rs-fcMRI in the mouse has strong structural underpinnings, validating our procedures. We next directly show that large-scale network properties previously identified in primates are present in rodents, although they differ in several ways. Last, we examine the existence of the so-called default mode network (DMN)—a distributed functional brain system identified in primates as being highly important for social cognition and overall brain function and atypically functionally connected across a multitude of disorders. We show the presence of a potential DMN in the mouse brain both structurally and functionally. Together, these studies confirm the presence of basic network properties and functional networks of high translational importance in structural and functional systems in the mouse brain. This work clears the way for an important bridge measurement between human and rodent models, enabling us to make stronger conclusions about how regionally specific cellular and molecular manipulations in mice relate back to humans. PMID:25512496

  16. Comparison of two reconfigurable N×N interconnects for a recurrent neural network

    NASA Astrophysics Data System (ADS)

    Berger, Christoph; Collings, Neil; Pourzand, Ali R.; Volkel, Reinhard

    1996-11-01

    Two different methods of pattern replication (conventional and interlaced fan-out) have been investigated and experimentally tested in a reconfigurable 5×5 optical interconnect. Similar alignment problems due to imaging errors (field curvature) were observed in both systems. We conclude that of the two methods the interlaced fan-out is better suited to avoid these imaging errors, to reduce system size and to implement an optical feedback loop.

  17. Computer-aided diagnostic detection system of venous beading in retinal images

    NASA Astrophysics Data System (ADS)

    Yang, Ching-Wen; Ma, DyeJyun; Chao, ShuennChing; Wang, ChuinMu; Wen, Chia-Hsien; Lo, ChienShun; Chung, Pau-Choo; Chang, Chein-I.

    2000-05-01

    The detection of venous beading in retinal images provides an early sign of diabetic retinopathy and plays an important role as a preprocessing step in diagnosing ocular diseases. We present a computer-aided diagnostic system to automatically detect venous beading of blood vessels. It comprises two modules, referred to as the blood vessel extraction module and the venous beading detection module. The former uses a bell-shaped Gaussian kernel with 12 azimuths to extract blood vessels, while the latter applies a neural network-based shape cognitron to detect venous beading among the extracted blood vessels for diagnosis. Both modules are fully computer-automated. To evaluate the proposed system, 61 retinal images (32 beaded and 29 normal images) are used for performance evaluation.
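
    The vessel-extraction module can be approximated by convolving the image with a Gaussian line kernel rotated through 12 azimuths and keeping the maximum response at each pixel. The kernel length and sigma below are illustrative assumptions, and the neural-network shape cognitron beading detector is not reproduced.

        # Matched-filter vessel response using a Gaussian line kernel at 12 azimuths.
        import numpy as np
        from scipy.ndimage import convolve, rotate

        def gaussian_line_kernel(length=15, sigma=2.0):
            x = np.arange(length) - length // 2
            profile = np.exp(-x ** 2 / (2 * sigma ** 2))   # Gaussian cross-section
            kernel = np.tile(profile, (length, 1))         # constant along the vessel direction
            return kernel - kernel.mean()                  # zero-mean matched filter

        def vessel_response(image, n_azimuths=12):
            base = gaussian_line_kernel()
            responses = []
            for k in range(n_azimuths):
                kern = rotate(base, angle=k * 180.0 / n_azimuths, reshape=False)
                responses.append(convolve(image.astype(float), kern))
            return np.max(responses, axis=0)               # strongest response over all orientations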

  18. The North Alabama Severe Thunderstorm Observations, Research, and Monitoring Network (STORMnet)

    NASA Technical Reports Server (NTRS)

    Goodman, S. J.; Blakeslee, R.; Christian, H.; Boccippio, D.; Koshak, W.; Bailey, J.; Hall, J.; Bateman, M.; McCaul, E.; Buechler, D.; hide

    2002-01-01

    The Severe Thunderstorm Observations, Research, and Monitoring network (STORMnet) became operational in 2001 as a test bed to infuse new science and technologies into the severe and hazardous weather forecasting and warning process. STORMnet is a collaboration among NASA scientists, National Weather Service (NWS) forecasters, emergency managers and other partners. STORMnet integrates total lightning observations from a ten-station 3-D VHF regional lightning mapping array, the National Lightning Detection Network (NLDN), real-time regional NEXRAD Doppler radar, satellite visible and infrared imagers, and a mobile atmospheric profiling system to characterize storms and their evolution. The storm characteristics and life-cycle trending are accomplished in real time through the second-generation Lightning Imaging Sensor Demonstration and Display (LISDAD II), a distributed processing system with a JAVA-based display application that allows anyone, anywhere, to track individual storm histories within the Tennessee Valley region of north Alabama and Tennessee, a region of the southeastern U.S. well known for abundant severe weather.

  19. A Dynamic Image Quality Evaluation of Videofluoroscopy Images: Considerations for Telepractice Applications.

    PubMed

    Burns, Clare L; Keir, Benjamin; Ward, Elizabeth C; Hill, Anne J; Farrell, Anna; Phillips, Nick; Porter, Linda

    2015-08-01

    High-quality fluoroscopy images are required for accurate interpretation of videofluoroscopic swallow studies (VFSS) by speech pathologists and radiologists. Consequently, integral to developing any system to conduct VFSS remotely via telepractice is ensuring that the quality of the VFSS images transferred via the telepractice system is optimized. This study evaluates the extent of change observed in image quality when videofluoroscopic images are transmitted from a digital fluoroscopy system to (a) current clinical equipment (a KayPentax Digital Swallowing Workstation) and (b) four different telepractice system configurations. The telepractice system configurations consisted of either a local C20 or C60 Cisco TelePresence System (codec unit) connected to the digital fluoroscopy system and linked to a second remote C20 or C60 Cisco TelePresence System via a network running at speeds of 2, 4 or 6 megabits per second (Mbit/s). Image quality was tested using the NEMA XR 21 Phantom, and the results demonstrated some loss in spatial resolution, low-contrast detectability and temporal resolution for all transferred images when compared to the fluoroscopy source. When using higher-capacity codec units and/or the highest bandwidths to support data transmission, the image quality transmitted through the telepractice system was found to be comparable to, if not better than, that of the current clinical system. This study confirms that telepractice systems can be designed to support fluoroscopy image transfer and highlights important considerations when developing telepractice systems for VFSS analysis to ensure high-quality radiological image reproduction.

  20. Center for Neural Engineering: applications of pulse-coupled neural networks

    NASA Astrophysics Data System (ADS)

    Malkani, Mohan; Bodruzzaman, Mohammad; Johnson, John L.; Davis, Joel

    1999-03-01

    The Pulse-Coupled Neural Network (PCNN) is an oscillatory neural network model in which cells form groups, and groups in turn group among themselves, based on the synchronicity of their oscillations; the output time series (the number of cells that fire at each input presentation) is called an 'icon'. Recent work by Johnson and others demonstrated the functional capabilities of networks containing such elements for invariant feature extraction using intensity maps. The PCNN thus presents itself as a more biologically plausible model with solid functional potential. This paper presents a summary of several projects, and their results, in which we successfully applied the PCNN. In project one, the PCNN was applied to object recognition and classification through a robotic vision system. The features (icons) generated by the PCNN were then fed into a feedforward neural network for classification. In project two, we developed techniques for sensory data fusion. The PCNN algorithm was implemented and tested on a B14 mobile robot. The PCNN-based features were extracted from images taken by the robot vision system and used, in conjunction with the map generated by data fusion of the sonar and wheel-encoder data, for the navigation of the mobile robot. In our third project, we applied the PCNN to speaker recognition. The spectrogram images of speech signals are fed into the PCNN to produce invariant feature icons, which are then fed into a feedforward neural network for speaker identification.
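
    A reduced PCNN that outputs the 'icon' time series (the number of neurons firing at each iteration) can be sketched as follows. The parameter values and the 3 × 3 linking kernel are illustrative defaults, not those used in the projects described above.

        # Reduced pulse-coupled neural network producing the per-iteration firing counts ("icon").
        import numpy as np
        from scipy.ndimage import convolve

        def pcnn_icon(image, n_iter=50, beta=0.2,
                      a_f=0.1, a_l=0.3, a_e=0.3, v_f=0.5, v_l=0.2, v_e=20.0):
            S = image.astype(float) / max(float(image.max()), 1e-12)   # normalized stimulus
            F = np.zeros_like(S); L = np.zeros_like(S)
            E = np.ones_like(S); Y = np.zeros_like(S)
            W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
            icon = []
            for _ in range(n_iter):
                link_input = convolve(Y, W)
                F = np.exp(-a_f) * F + v_f * link_input + S            # feeding compartment
                L = np.exp(-a_l) * L + v_l * link_input                # linking compartment
                U = F * (1.0 + beta * L)                               # internal activity
                Y = (U > E).astype(float)                              # pulse output
                E = np.exp(-a_e) * E + v_e * Y                         # dynamic threshold
                icon.append(int(Y.sum()))                              # number of firing cells
            return icon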
