Sample records for image control system

  1. High-resolution, continuous field-of-view (FOV), non-rotating imaging system

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)

    2010-01-01

    A high-resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition and a control apparatus inclusive of distributed processors and software for device control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes an automated or manually controlled display system and software providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or an automated ATR system to select regions for higher-resolution inspection.

  2. Autofocus system and autofocus method for focusing on a surface

    DOEpatents

    O'Neill, Mary Morabito

    2017-05-23

    An autofocus system includes an imaging device, a lens system, and a focus control actuator that is configured to change a focus position of the imaging device in relation to a stage. An electronic control unit is configured to drive the focus control actuator to a plurality of predetermined focus positions, activate the imaging device to obtain an image at each predetermined position, and then apply a spatial filter to each obtained image, generating a corresponding filtered image. The control unit determines a focus score for each filtered image such that the focus score corresponds to the degree of focus in the obtained image. The control unit identifies the best focus position by comparing the focus scores of the filtered images and drives the focus control actuator to the position with the highest focus score.
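
    The focus-scoring loop described above lends itself to a short sketch. The following is a minimal illustration, assuming a Laplacian-based spatial filter and hypothetical capture_image/move_focus helpers standing in for the real hardware interfaces; it is not the patented implementation.

    ```python
    import numpy as np
    from scipy.ndimage import laplace

    def focus_score(image):
        """Higher variance of the Laplacian indicates sharper detail."""
        filtered = laplace(image.astype(float))   # spatial high-pass filter
        return filtered.var()

    def autofocus(positions, capture_image, move_focus):
        """Scan predetermined focus positions and return the sharpest one."""
        scores = {}
        for z in positions:
            move_focus(z)                 # drive the focus control actuator
            img = capture_image()         # acquire an image at this position
            scores[z] = focus_score(img)  # score the filtered image
        best = max(scores, key=scores.get)
        move_focus(best)                  # settle at the best focus position
        return best, scores
    ```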

  3. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer has excellent universality and expansibility and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, filter wheel and stabilized platform operation, and image and POS data acquisition, and stores the images and data. Peripheral devices can be connected through the ports of the embedded computer, so system operation and management of the stored image data are straightforward. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. Imaging experiment results show that the system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  4. Design of low noise imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Chen, Xiaolai

    2017-10-01

    In order to meet the needs of engineering applications for a low-noise imaging system under global-shutter operation, a complete imaging system is designed based on the SCMOS (Scientific CMOS) image sensor CIS2521F. The paper introduces the hardware circuit and software system design. Based on an analysis of the key specifications and technologies of the imaging system, chips are selected and an SCMOS + FPGA + DDRII + Camera Link processing architecture is adopted. The entire system workflow and the power supply and distribution unit design are then introduced. The software system, which consists of an SCMOS control module, an image acquisition module, a data cache control module, and a transmission control module, is designed in Verilog and runs on a Xilinx FPGA. Imaging experiments show that the system delivers a 2560 × 2160 pixel resolution with a maximum frame rate of 50 fps. The imaging quality of the system satisfies the design requirements.

  5. A head movement image (HMI)-controlled computer mouse for people with disabilities.

    PubMed

    Chen, Yu-Luen; Chen, Weoi-Luen; Kuo, Te-Son; Lai, Jin-Shin

    2003-02-04

    This study proposes image processing and microprocessor technology for use in developing a head movement image (HMI)-controlled computer mouse system for people with spinal cord injury (SCI). The system controls the movement and direction of the mouse cursor by capturing head movement images using a marker installed on the user's headset. In a clinical trial, the new mouse system was compared with an infrared-controlled mouse system on various tasks with nine subjects with SCI. The results were favourable to the new mouse system, and the differences between the new mouse system and the infrared-controlled mouse reached statistical significance in each of the test situations (p<0.05). The HMI-controlled computer mouse improves input speed. People with disabilities need only wear the headset and move their heads to freely control the movement of the mouse cursor.
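
    As a rough illustration of the marker-to-cursor mapping described above, the sketch below treats the headset marker as the brightest blob in each camera frame and scales its displacement into a cursor displacement; detect_marker, the gain value, and the frame format are assumptions, not details from the paper.

    ```python
    import numpy as np

    def detect_marker(frame):
        """Return the (row, col) of the brightest pixel as the marker position."""
        return np.unravel_index(np.argmax(frame), frame.shape)

    def cursor_update(prev_marker, frame, gain=2.0):
        """Translate head-marker displacement into a cursor displacement."""
        marker = detect_marker(frame)
        d_row = marker[0] - prev_marker[0]
        d_col = marker[1] - prev_marker[1]
        # scale head motion into cursor motion; the OS-level mouse call is omitted
        return marker, (gain * d_col, gain * d_row)
    ```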

  6. Standardisation of DNA quantitation by image analysis: quality control of instrumentation.

    PubMed

    Puech, M; Giroud, F

    1999-05-01

    DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of analysis and particularly on the quality of image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena and segmentation efficiency. Some systems have been controlled with these tools and these quality control procedures. Interpretation criteria and accuracy limits of these quality control procedures are proposed according to the conclusions of a European project called PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to realise precise DNA analysis. The different procedures presented in this work determine if an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If the controlled systems are beyond the defined limits, some recommendations are given to find a solution to the problem.

  7. Designing a stable feedback control system for blind image deconvolution.

    PubMed

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems, with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to prevent the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors; thus the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Image sensor system with bio-inspired efficient coding and adaptation.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
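
    The three strategies named above (logarithmic transform, local average subtraction, feedback gain control) map naturally onto a few lines of array code. The sketch below is a software analogue under assumed parameters (kernel size, gain-update rule, target contrast), not the FPGA/APS implementation itself.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def encode_frame(intensity, gain, target_contrast=0.2, eps=1e-6):
        log_img = np.log(intensity + eps)              # logarithmic transform compresses dynamic range
        local_avg = uniform_filter(log_img, size=9)    # local average (resistive-network analogue)
        contrast = log_img - local_avg                 # local average subtraction
        out = gain * contrast                          # apply the current gain
        # feedback gain control: nudge the gain toward the target output contrast
        gain *= 1.0 + 0.1 * (target_contrast - float(np.abs(out).mean()))
        return out, gain
    ```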

  9. Developing stereo image based robot control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suprijadi; Pambudi, I. R.; Woran, M.

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with improving hardware and microprocessor performance. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that give a 3-dimensional image or movie are very interesting, but they have few applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.
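
    A disparity-driven steering rule in the spirit of this abstract can be sketched with OpenCV block matching; the obstacle threshold, region of interest, and the returned drive commands are illustrative assumptions rather than the authors' method.

    ```python
    import cv2
    import numpy as np

    def steer_from_stereo(left_gray, right_gray, near_disparity=40.0):
        """left_gray/right_gray: rectified 8-bit grayscale frames from the stereo pair."""
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        h, w = disparity.shape
        center = disparity[h // 3: 2 * h // 3, w // 3: 2 * w // 3]  # look straight ahead
        if center.max() > near_disparity:   # large disparity means a nearby obstacle
            return "turn"                   # steer away
        return "forward"                    # path ahead is clear
    ```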

  10. The quantitative control and matching of an optical false color composite imaging system

    NASA Astrophysics Data System (ADS)

    Zhou, Chengxian; Dai, Zixin; Pan, Xizhe; Li, Yinxi

    1993-10-01

    The design of an imaging system for optical false color composite (OFCC) imaging capable of high-precision density-exposure time control and color balance is presented. The system provides high-quality FCC image data that can be analyzed using a quantitative calculation method. The quality requirement for each part of the image generation system is defined, and the distribution of satellite remote sensing image information is analyzed. The proposed technology makes it possible to present remote sensing image data more effectively and accurately.

  11. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE), and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed, and the imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves the image edges.
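
    The hybrid interpolation idea (edge-directed for edge pixels, bilinear otherwise) can be illustrated for the green channel at a red/blue Bayer site; the gradient threshold and Bayer layout below are assumptions for illustration, not the paper's exact algorithm.

    ```python
    def interpolate_green(bayer, r, c, edge_threshold=20):
        """Estimate green at a non-green Bayer site (r, c) of a 2-D uint8 array."""
        up, down = int(bayer[r - 1, c]), int(bayer[r + 1, c])
        left, right = int(bayer[r, c - 1]), int(bayer[r, c + 1])
        dh, dv = abs(left - right), abs(up - down)
        if dh + edge_threshold < dv:
            return (left + right) / 2          # strong vertical gradient: average along the row
        if dv + edge_threshold < dh:
            return (up + down) / 2             # strong horizontal gradient: average along the column
        return (up + down + left + right) / 4  # non-edge pixel: plain bilinear average
    ```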

  12. Hologlyphics: volumetric image synthesis performance system

    NASA Astrophysics Data System (ADS)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images, for example voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and various content has been developed and shown to live audiences by a live performer. Real world applications will be explored, with feedback on the human factors.

  13. Method and system to synchronize acoustic therapy with ultrasound imaging

    NASA Technical Reports Server (NTRS)

    Hossack, James (Inventor); Owen, Neil (Inventor); Bailey, Michael R. (Inventor)

    2009-01-01

    Interference in ultrasound imaging when used in connection with high-intensity focused ultrasound (HIFU) is avoided by employing a synchronization signal to control the HIFU signal. Unless the timing of the HIFU transducer is controlled, its output will substantially overwhelm the signal produced by the ultrasound imaging system and obscure the image it produces. The synchronization signal employed to control the HIFU transducer is obtained without requiring modification of the ultrasound imaging system. Signals corresponding to scattered ultrasound imaging waves are collected using either the HIFU transducer or a dedicated receiver. A synchronization processor manipulates the scattered ultrasound imaging signals to derive the synchronization signal, which is then used to control the HIFU bursts so as to substantially reduce or eliminate HIFU interference in the ultrasound image. The synchronization processor can alternatively be implemented using a computing device or an application-specific circuit.

  14. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system, which provides a framework or standard for implementing image data processing applications, simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  15. Developing an interactive teleradiology system for SARS diagnosis

    NASA Astrophysics Data System (ADS)

    Sun, Jianyong; Zhang, Jianguo; Zhuang, Jun; Chen, Xiaomeng; Yong, Yuanyuan; Tan, Yongqiang; Chen, Liu; Lian, Ping; Meng, Lili; Huang, H. K.

    2004-04-01

    Severe acute respiratory syndrome (SARS) is a respiratory illness that was reported in Asia, North America, and Europe in the spring of 2003. Most of the SARS cases in China occurred through infection in hospitals or among travelers. To protect physicians, experts, and nurses from SARS during diagnosis and treatment procedures, infection control mechanisms were built into SARS hospitals. We built a Web-based interactive teleradiology system to assist the radiologists and physicians both inside and outside the control area in making image diagnoses. The system consists of three major components: a DICOM gateway (GW), a Web-based image repository server (Server), and a Web-based DICOM viewer (Viewer). The system was installed and integrated with CR, CT, and the hospital information system (HIS) in Shanghai Xinhua hospital to provide image-based ePR functions for SARS consultation between the radiologists, physicians, and experts inside and outside the control area. Users both inside and outside the control area can use the system to process and manipulate the DICOM images interactively, and the system provides a remote control mechanism to synchronize their operations on images and display.

  16. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs using low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limit to the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. This makes it possible to evaluate and utilize only the reliable features among the trained ones during each authentication and to achieve high recognition performance. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality, with an EER (Equal Error Rate) about four times lower under a variety of image conditions than that of a system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but consistent performance because of its general optimization over the whole dataset. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions but degrades to the same performance as PCA under poor image conditions.

  17. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George [Maple Valley, WA]; Afromowitz, Martin A. [Mercer Island, WA]; Hogle, Richard E. [Olympia, WA]

    2008-10-14

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns, either directly producing images of the interior of the boiler or feeding signals to a data processing system that provides information enabling the distributed control system by which the boilers are operated to run them more efficiently. The data processing system includes an image pre-processing circuit in which a 2-D image formed by the video data input is captured, and includes a low-pass filter for noise filtering of the video input. It also includes an image compensation system for array compensation, to correct for pixel variation, dead cells, and the like, and for correcting geometric distortion. An image segmentation module receives a cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also performs thresholding/clustering on gray scale/texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches the derived regions to a 3-D model of the boiler. It derives a 3-D structure of the deposition on the pendant tubes in the boiler and provides the information about deposits to the plant distributed control system for more efficient operation of the plant pendant tube cleaning and operating systems.
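
    The pipeline stages named in this record (noise filtering, thresholding, morphological smoothing, connected-component labeling) can be compressed into a short sketch; the filter size, threshold, and normalization below are assumptions, not the patent's values.

    ```python
    import numpy as np
    from scipy import ndimage

    def segment_deposits(ir_frame, deposit_threshold=0.6):
        smoothed = ndimage.gaussian_filter(ir_frame.astype(float), sigma=2)  # low-pass noise filtering
        norm = (smoothed - smoothed.min()) / (np.ptp(smoothed) + 1e-9)       # normalize to [0, 1]
        mask = norm > deposit_threshold                    # gray-scale thresholding
        mask = ndimage.binary_opening(mask, iterations=2)  # morphological smoothing of regions
        labels, n_regions = ndimage.label(mask)            # identify regions by connected components
        return labels, n_regions
    ```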

  18. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    [Table fragments from the report: lists of candidate teleoperated-vehicle controls (starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning) and electro-optic sensors (video, radar, IR/thermal imaging system, image intensifier, laser ranger, forward/stereo/rear video camera selectors); no abstract text was recovered.]

  19. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    PubMed Central

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-01-01

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as photosensitive-resistance-sensor-based or infrared-sensor-based systems, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system operates, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which the cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control is applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to changes in environmental luminance. PMID:28208781
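
    A single-LEEM control step of the kind described above can be sketched as a simple proportional loop; the mean-luminance metric, benchmark comparison, and set_led_duty interface are assumptions standing in for the paper's trained benchmarks and lamp driver.

    ```python
    import numpy as np

    def luminance_leem(image):
        """A simple objective LEEM: mean luminance of the captured frame."""
        return float(np.mean(image))

    def control_step(image, benchmark, duty, k=0.001, set_led_duty=None):
        error = benchmark - luminance_leem(image)           # deviation from the cluster benchmark
        duty = float(np.clip(duty + k * error, 0.0, 1.0))   # proportional correction of LED drive level
        if set_led_duty is not None:
            set_led_duty(duty)                              # push the new drive level to the lamp
        return duty
    ```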

  20. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback.

    PubMed

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-02-09

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as photosensitive-resistance-sensor-based or infrared-sensor-based systems, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system operates, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which the cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control is applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to changes in environmental luminance.

  1. Automated system for acquisition and image processing for the control and monitoring boned nopal

    NASA Astrophysics Data System (ADS)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

    This paper describes the design and fabrication of a system for image acquisition and processing to control the removal of thorns from the nopal vegetable (Opuntia ficus-indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the positions of the areolas are known, their coordinates are sent to a motor system that steers the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware implements the tasks of acquisition, preprocessing, segmentation, recognition, and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.
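
    The areola-locating step maps onto a short segmentation sketch: threshold the CCD image, label connected regions, and emit centroid coordinates for the laser-steering motors. The threshold, minimum region size, and the assumption that areolas appear as bright regions are illustrative only.

    ```python
    import numpy as np
    from scipy import ndimage

    def areola_coordinates(gray_image, threshold=0.5, min_pixels=20):
        norm = gray_image.astype(float) / (gray_image.max() + 1e-9)
        mask = norm > threshold                    # segment candidate areola regions
        labels, n = ndimage.label(mask)            # connected-component labeling
        coords = []
        for region in range(1, n + 1):
            blob = labels == region
            if blob.sum() >= min_pixels:
                coords.append(ndimage.center_of_mass(blob))  # (row, col) centroid
        return coords  # table of coordinates for the galvo/motor system
    ```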

  2. Use of anomalous thermal imaging effects for multi-mode systems control during crystal growth

    NASA Technical Reports Server (NTRS)

    Wargo, Michael J.

    1989-01-01

    Real-time image processing techniques, combined with multitasking computational capabilities, are used to establish thermal imaging as a multimode sensor for systems control during crystal growth. Whereas certain regions of the high-temperature scene are presently unusable for quantitative determination of temperature, the anomalous information thus obtained is found to serve as a potentially low-noise source of other important systems-control output. Using this approach, the light emission/reflection characteristics of the crystal, meniscus, and melt system are used to infer the crystal diameter, and a linear regression algorithm is employed to determine the local diameter trend. This data is utilized as input for closed-loop control of crystal shape. No performance penalty in thermal imaging speed is paid for this added functionality. The approach to secondary (diameter) sensor design and the systems control structure is discussed. Preliminary experimental results are presented.
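
    The diameter-trend step mentioned above amounts to a running linear regression over recent diameter estimates; the window length below is an assumption, and the inferred-diameter input would come from the thermal-image analysis.

    ```python
    import numpy as np

    def diameter_trend(diameters, window=50):
        """Return (latest diameter, slope) from the last `window` samples."""
        recent = np.asarray(diameters[-window:], dtype=float)
        t = np.arange(len(recent))
        slope, _intercept = np.polyfit(t, recent, 1)  # linear regression on the local diameter history
        return recent[-1], slope                      # slope > 0 suggests the crystal is widening
    ```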

  3. Robust algebraic image enhancement for intelligent control systems

    NASA Technical Reports Server (NTRS)

    Lerner, Bao-Ting; Morrelli, Michael

    1993-01-01

    Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.

  4. An arc control and protection system for the JET lower hybrid antenna based on an imaging system.

    PubMed

    Figueiredo, J; Mailloux, J; Kirov, K; Kinna, D; Stamp, M; Devaux, S; Arnoux, G; Edwards, J S; Stephen, A V; McCullen, P; Hogben, C

    2014-11-01

    Arcs are potentially the most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled they can produce damage and cause plasma disruption by impurity influx. To address this issue, an arc real-time control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma. This is done through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real-time control system. It has adapted the ITER-like wall hotspot protection system, using an identical CCD camera and real-time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided into 24 Regions Of Interest (ROIs), each one corresponding to one klystron. If an arc precursor is detected in a ROI, power is reduced locally and subsequent potential damage and plasma disruption are avoided. The power is subsequently reinstated if, during a defined interval of time, arcing is confirmed not to be present by image analysis. This system was successfully commissioned during the restart phase and the beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real-time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.
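
    The per-ROI protection logic can be summarized in a schematic sketch: when a bright arc precursor crosses a threshold inside a klystron's region of interest, that klystron's power is cut back, and it is restored after the region stays quiet for a hold-off interval. The threshold, the 50% cut, and the power interface are assumptions, not JET parameters.

    ```python
    import numpy as np

    def protect_rois(frame, rois, power, nominal, threshold, hold_off=10, quiet=None):
        """rois: list of (r0, r1, c0, c1) windows, one per klystron; power/nominal: per-klystron levels."""
        if quiet is None:
            quiet = [0] * len(rois)
        for i, (r0, r1, c0, c1) in enumerate(rois):
            if np.max(frame[r0:r1, c0:c1]) > threshold:  # arc precursor detected in this ROI
                power[i] = 0.5 * power[i]                # reduce power locally
                quiet[i] = 0
            else:
                quiet[i] += 1
                if quiet[i] >= hold_off:                 # no arcing during the defined interval
                    power[i] = nominal[i]                # reinstate the requested power
        return power, quiet
    ```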

  5. Clinical evaluation of a confocal microendoscope system for imaging the ovary

    NASA Astrophysics Data System (ADS)

    Tanbakuchi, Anthony A.; Rouse, Andrew R.; Hatch, Kenneth D.; Sampliner, Richard E.; Udovich, Josh A.; Gmitro, Arthur F.

    2008-02-01

    We have developed a mobile confocal microendoscope system that provides live cellular imaging during surgery to aid in diagnosing microscopic abnormalities, including cancer. We present initial clinical trial results using the device to image ovaries in vivo using fluorescein, and ex vivo results using acridine orange. The imaging catheter has improved depth-control and localized dye-delivery mechanisms compared with those previously presented. A manual control now provides a simple way for the surgeon to adjust and optimize imaging depth during the procedure, while a tiny piezo valve in the imaging catheter controls the dye delivery.

  6. Electrically optofluidic zoom system with a large zoom range and high-resolution image.

    PubMed

    Li, Lei; Yuan, Rong-Ying; Wang, Jin-Hui; Wang, Qiong-Hua

    2017-09-18

    We report an electrically controlled optofluidic zoom system which can achieve a large continuous zoom change and a high-resolution image. The zoom system consists of an optofluidic zoom objective and a switchable light path, which are controlled by two liquid optical shutters. The proposed zoom system can achieve a large tunable focal length range from 36 mm to 92 mm, and over this tuning range it can correct aberrations dynamically, so the image resolution is high. Owing to the large zoom range, the proposed imaging system incorporates both a camera configuration and a telescope configuration into one system. In addition, the whole system is electrically controlled by three electrowetting liquid lenses and two liquid optical shutters; therefore, the proposed system is very compact and free of mechanical moving parts. The proposed zoom system has the potential to take the place of conventional zoom systems.

  7. Correction of a liquid lens for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    Bower, Andrew J.; Bunch, Robert M.; Leisher, Paul O.; Li, Weixu; Christopher, Lauren A.

    2012-06-01

    3D imaging systems are currently being developed using liquid lens technology for use in medical devices as well as in consumer electronics. Liquid lenses operate on the principle of electrowetting to control the curvature of a buried surface, allowing for a voltage-controlled change in focal length. Imaging systems which utilize a liquid lens allow extraction of depth information from the object field through a controlled introduction of defocus into the system. The design of such a system must be carefully considered in order to simultaneously deliver good image quality and meet the depth of field requirements for image processing. In this work a corrective model has been designed for use with the Varioptic Arctic 316 liquid lens. The design is able to be optimized for depth of field while minimizing aberrations for a 3D imaging application. The modeled performance is compared to the measured performance of the corrected system over a large range of focal lengths.

  8. Strict integrity control of biomedical images

    NASA Astrophysics Data System (ADS)

    Coatrieux, Gouenou; Maitre, Henri; Sankur, Bulent

    2001-08-01

    The control of the integrity and authentication of medical images is becoming ever more important within the Medical Information Systems (MIS). The intra- and interhospital exchange of images, such as in the PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulation and distribution of images have brought forth the security aspects. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.

  9. Design and construction of a high frame rate imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Waugaman, John L.; Liu, Anjun; Lu, Jian-Yu

    2002-05-01

    A new high frame rate imaging method has been developed recently [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44, 839-856 (1997)]. This method may have clinical application for imaging of fast-moving objects such as human hearts, velocity vector imaging, and low-speckle imaging. To implement the method, an imaging system has been designed. The system consists of one main printed circuit board (PCB) and 16 channel boards (each channel board contains 8 channels), in addition to a set-top box for connections to a personal computer (PC), a front panel board for user control and message display, and a power control and distribution board. The main board contains a field programmable gate array (FPGA) and controls all channels (each channel also has an FPGA). We will report the analog and digital circuit design and simulations, multilayer PCB design with commercial software (Protel 99), PCB signal integrity testing and system RFI/EMI shielding, and the assembly and construction of the entire system. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  10. Logic design and implementation of FPGA for a high frame rate ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Anjun; Wang, Jing; Lu, Jian-Yu

    2002-05-01

    Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-gain compensation (TGC) control, Doppler imaging, harmonic imaging, as well as coded transmissions] is designed. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of the SDRAMs to reduce peak power, transmission and imaging modality selection, ECG data acquisition and synchronization, a 160 MHz delay-locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  11. Autonomous control systems: applications to remote sensing and image processing

    NASA Astrophysics Data System (ADS)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high and its model (if available) is nonlinear and interconnected, and information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, an integrated manufacturing plant, the Hubble Telescope, and the International Space Station. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms, and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of a soft computing control architecture will be nearly infinite. In this paper, new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and the enhancement of analog and digital images.

  12. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    NASA Technical Reports Server (NTRS)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined, and in particular the image processing hardware and software used to extract features at low levels of sensory processing are described for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  13. Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers.

    PubMed

    Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho

    2007-06-20

    A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. For implementing the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system in the integral imaging. Since the transparencies of PDLC films are electrically controllable, we can make each film diffuse the projected light successively with a different depth from the lens array. As a result, the proposed method enables control of the location of image planes electrically and enhances the depth. The principle of the proposed method is described, and experimental results are also presented.

  14. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.

  15. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  16. Photoacoustic and ultrasound dual-modality imaging of human peripheral joints

    NASA Astrophysics Data System (ADS)

    Xu, Guan; Rajian, Justin R.; Girish, Gandikota; Kaplan, Mariana J.; Fowlkes, J. Brian; Carson, Paul L.; Wang, Xueding

    2013-01-01

    A photoacoustic (PA) and ultrasound (US) dual modality system, for imaging human peripheral joints, is introduced. The system utilizes a commercial US unit for both US control imaging and PA signal acquisition. Preliminary in vivo evaluation of the system, on normal volunteers, revealed that this system can recover both the structural and functional information of intra- and extra-articular tissues. Confirmed by the control US images, the system, on the PA mode, can differentiate tendon from surrounding soft tissue based on the endogenous optical contrast. Presenting both morphological and pathological information in joint, this system holds promise for diagnosis and characterization of inflammatory joint diseases such as rheumatoid arthritis.

  17. A design of endoscopic imaging system for hyper long pipeline based on wheeled pipe robot

    NASA Astrophysics Data System (ADS)

    Zheng, Dongtian; Tan, Haishu; Zhou, Fuqiang

    2017-03-01

    An endoscopic imaging system for hyper-long pipelines is designed to acquire images of the inner surface in advance for detection and measurement in hyper-long pipelines. The system consists of structured-light sensors, a pipe robot, and a control system. The pipe robot has a wheeled structure, with the sensor at the front of the vehicle body. The control system, at the tail of the vehicle body, takes the form of an upper and a lower computer. The sensor can be translated and scanned in three steps: walking, lifting, and scanning, so the inner surface image can be acquired at multiple positions and different angles. Imaging experiments show that the system's transmission distance is longer, the acquisition angles are more diverse, and the results are more comprehensive than with a traditional imaging system, which lays an important foundation for later inner-surface vision measurement.

  18. Medical imaging systems

    DOEpatents

    Frangioni, John V

    2013-06-25

    A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparoscopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.

  19. An arc control and protection system for the JET lower hybrid antenna based on an imaging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueiredo, J., E-mail: joao.figueiredo@jet.efda.org; Mailloux, J.; Kirov, K.

    Arcs are potentially the most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled they can produce damage and cause plasma disruption by impurity influx. To address this issue, an arc real-time control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma. This is done through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real-time control system. It has adapted the ITER-like wall hotspot protection system, using an identical CCD camera and real-time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided into 24 Regions Of Interest (ROIs), each one corresponding to one klystron. If an arc precursor is detected in a ROI, power is reduced locally and subsequent potential damage and plasma disruption are avoided. The power is subsequently reinstated if, during a defined interval of time, arcing is confirmed not to be present by image analysis. This system was successfully commissioned during the restart phase and the beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real-time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.

  20. Direct laser additive fabrication system with image feedback control

    DOEpatents

    Griffith, Michelle L.; Hofmeister, William H.; Knorovsky, Gerald A.; MacCallum, Danny O.; Schlienger, M. Eric; Smugeresky, John E.

    2002-01-01

    A closed-loop, feedback-controlled direct laser fabrication system is disclosed. The feedback refers to the actual growth conditions obtained by real-time analysis of thermal radiation images. The resulting system can fabricate components with severalfold improvement in dimensional tolerances and surface finish.

  1. The AdaptiSPECT Imaging Aperture

    PubMed Central

    Chaix, Cécile; Moore, Jared W.; Van Holen, Roel; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    In this paper, we present the imaging aperture of an adaptive SPECT imaging system being developed at the Center for Gamma Ray Imaging (AdaptiSPECT). AdaptiSPECT is designed to automatically change its configuration in response to preliminary data, in order to improve image quality for a particular task. In a traditional pinhole SPECT imaging system, the characteristics (magnification, resolution, field of view) are set by the geometry of the system, and any modification can be accomplished only by manually changing the collimator and the distance of the detector to the center of the field of view. Optimization of the imaging system for a specific task on a specific individual is therefore difficult. In an adaptive SPECT imaging system, on the other hand, the configuration can be conveniently changed under computer control. A key component of an adaptive SPECT system is its aperture. In this paper, we present the design, specifications, and fabrication of the adaptive pinhole aperture that will be used for AdaptiSPECT, as well as the controls that enable autonomous adaptation. PMID:27019577

  2. Navigation technique for MR-endoscope system using a wireless accelerometer-based remote control device.

    PubMed

    Kumamoto, Etsuko; Takahashi, Akihiro; Matsuoka, Yuichiro; Morita, Yoshinori; Kutsumi, Hiromu; Azuma, Takeshi; Kuroda, Kagayaki

    2013-01-01

    The MR-endoscope system can perform magnetic resonance (MR) imaging during endoscopy and show the images obtained by using endoscope and MR. The MR-endoscope system can acquire a high-spatial resolution MR image with an intraluminal radiofrequency (RF) coil, and the navigation system shows the scope's location and orientation inside the human body and indicates MR images with a scope view. In order to conveniently perform an endoscopy and MR procedure, the design of the user interface is very important because it provides useful information. In this study, we propose a navigation system using a wireless accelerometer-based controller with Bluetooth technology and a navigation technique to set the intraluminal RF coil using the navigation system. The feasibility of using this wireless controller in the MR shield room was validated via phantom examinations of the influence on MR procedures and navigation accuracy. In vitro examinations using an isolated porcine stomach demonstrated the effectiveness of the navigation technique using a wireless remote-control device.

  3. Image processing occupancy sensor

    DOEpatents

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.
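
    The position-aware control idea can be sketched as a simple zone test: occupants detected by the image sensor are assigned to zones, and only the lighting/ventilation elements serving occupied zones are energized. The zone layout and set_zone_state callback are assumptions for illustration.

    ```python
    def control_zones(occupant_positions, zones, set_zone_state):
        """occupant_positions: list of (x, y) image coordinates; zones: name -> (x0, y0, x1, y1)."""
        for name, (x0, y0, x1, y1) in zones.items():
            occupied = any(x0 <= x <= x1 and y0 <= y <= y1
                           for x, y in occupant_positions)
            set_zone_state(name, on=occupied)  # energize elements only where occupants are present
    ```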

  4. Functionality and operation of fluoroscopic automatic brightness control/automatic dose rate control logic in modern cardiovascular and interventional angiography systems: A Report of Task Group 125 Radiography/Fluoroscopy Subcommittee, Imaging Physics Committee, Science Council

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauch, Phillip; Lin, Pei-Jan Paul; Balter, Stephen

    2012-05-15

    Task Group 125 (TG 125) was charged with investigating the functionality of fluoroscopic automatic dose rate and image quality control logic in modern angiographic systems, paying specific attention to the spectral shaping filters and variations in the selected radiologic imaging parameters. The task group was also charged with describing the operational aspects of the imaging equipment for the purpose of assisting the clinical medical physicist with clinical set-up and performance evaluation. Although there are clear distinctions between the fluoroscopic operation of an angiographic system and its acquisition modes (digital cine, digital angiography, digital subtraction angiography, etc.), the scope of this work was limited to the fluoroscopic operation of the systems studied. The use of spectral shaping filters in cardiovascular and interventional angiography equipment has been shown to reduce patient dose. If the imaging control algorithm were programmed to work in conjunction with the selected spectral filter, and if the generator parameters were optimized for the selected filter, then image quality could also be improved. Although assessment of image quality was not included as part of this report, it was recognized that for fluoroscopic imaging the parameters that influence radiation output, differential absorption, and patient dose are also the same parameters that influence image quality. Therefore, this report will utilize the terminology "automatic dose rate and image quality" (ADRIQ) when describing the control logic in modern interventional angiographic systems and, where relevant, will describe the influence of controlled parameters on the subsequent image quality. A total of 22 angiography units were investigated by the task group and of these one each was chosen as representative of the equipment manufactured by GE Healthcare, Philips Medical Systems, Shimadzu Medical USA, and Siemens Medical Systems. All equipment, for which measurement data were included in this report, was manufactured within the three year period from 2006 to 2008. Using polymethylmethacrylate (PMMA) plastic to simulate patient attenuation, each angiographic imaging system was evaluated by recording the following parameters: tube potential in units of kilovolts peak (kVp), tube current in units of milliamperes (mA), pulse width (PW) in units of milliseconds (ms), spectral filtration setting, and patient air kerma rate (PAKR) as a function of the attenuator thickness. Data were graphically plotted to reveal the manner in which the ADRIQ control logic responded to changes in object attenuation. There were similarities in the manner in which the ADRIQ control logic operated that allowed the four chosen devices to be divided into two groups, with two of the systems in each group. There were also unique approaches to the ADRIQ control logic that were associated with some of the systems, and these are described in the report. The evaluation revealed relevant information about the testing procedure and also about the manner in which different manufacturers approach the utilization of spectral filtration, pulsed fluoroscopy, and maximum PAKR limitation. This information should be particularly valuable to the clinical medical physicist charged with acceptance testing and performance evaluation of modern angiographic systems.

  5. Functionality and operation of fluoroscopic automatic brightness control/automatic dose rate control logic in modern cardiovascular and interventional angiography systems: a report of Task Group 125 Radiography/Fluoroscopy Subcommittee, Imaging Physics Committee, Science Council.

    PubMed

    Rauch, Phillip; Lin, Pei-Jan Paul; Balter, Stephen; Fukuda, Atsushi; Goode, Allen; Hartwell, Gary; LaFrance, Terry; Nickoloff, Edward; Shepard, Jeff; Strauss, Keith

    2012-05-01

    Task Group 125 (TG 125) was charged with investigating the functionality of fluoroscopic automatic dose rate and image quality control logic in modern angiographic systems, paying specific attention to the spectral shaping filters and variations in the selected radiologic imaging parameters. The task group was also charged with describing the operational aspects of the imaging equipment for the purpose of assisting the clinical medical physicist with clinical set-up and performance evaluation. Although there are clear distinctions between the fluoroscopic operation of an angiographic system and its acquisition modes (digital cine, digital angiography, digital subtraction angiography, etc.), the scope of this work was limited to the fluoroscopic operation of the systems studied. The use of spectral shaping filters in cardiovascular and interventional angiography equipment has been shown to reduce patient dose. If the imaging control algorithm were programmed to work in conjunction with the selected spectral filter, and if the generator parameters were optimized for the selected filter, then image quality could also be improved. Although assessment of image quality was not included as part of this report, it was recognized that for fluoroscopic imaging the parameters that influence radiation output, differential absorption, and patient dose are also the same parameters that influence image quality. Therefore, this report will utilize the terminology "automatic dose rate and image quality" (ADRIQ) when describing the control logic in modern interventional angiographic systems and, where relevant, will describe the influence of controlled parameters on the subsequent image quality. A total of 22 angiography units were investigated by the task group and of these one each was chosen as representative of the equipment manufactured by GE Healthcare, Philips Medical Systems, Shimadzu Medical USA, and Siemens Medical Systems. All equipment, for which measurement data were included in this report, was manufactured within the three year period from 2006 to 2008. Using polymethylmethacrylate (PMMA) plastic to simulate patient attenuation, each angiographic imaging system was evaluated by recording the following parameters: tube potential in units of kilovolts peak (kVp), tube current in units of milliamperes (mA), pulse width (PW) in units of milliseconds (ms), spectral filtration setting, and patient air kerma rate (PAKR) as a function of the attenuator thickness. Data were graphically plotted to reveal the manner in which the ADRIQ control logic responded to changes in object attenuation. There were similarities in the manner in which the ADRIQ control logic operated that allowed the four chosen devices to be divided into two groups, with two of the systems in each group. There were also unique approaches to the ADRIQ control logic that were associated with some of the systems, and these are described in the report. The evaluation revealed relevant information about the testing procedure and also about the manner in which different manufacturers approach the utilization of spectral filtration, pulsed fluoroscopy, and maximum PAKR limitation. This information should be particularly valuable to the clinical medical physicist charged with acceptance testing and performance evaluation of modern angiographic systems.
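
    To make the evaluation procedure above concrete, the following sketch (illustrative placeholder numbers, not the task group's measurements or software) tabulates recorded generator parameters against PMMA attenuator thickness and plots them so the shape of the ADRIQ response can be inspected.

    # Minimal sketch: plot recorded fluoroscopic parameters versus PMMA attenuator
    # thickness to visualize ADRIQ behaviour. The numbers below are illustrative
    # placeholders, not measured data from any of the systems in the report.
    import matplotlib.pyplot as plt

    pmma_cm = [4, 8, 12, 16, 20, 24, 28, 32]                    # attenuator thickness
    kvp     = [62, 66, 71, 77, 84, 92, 101, 110]                # tube potential (kVp)
    ma      = [4.0, 6.5, 10.2, 15.8, 24.1, 35.0, 48.7, 63.2]    # tube current (mA)
    pakr    = [1.1, 2.0, 3.7, 6.8, 12.3, 21.5, 35.9, 55.0]      # patient air kerma rate (mGy/min)

    fig, axes = plt.subplots(3, 1, sharex=True, figsize=(6, 8))
    for ax, y, label in zip(axes, (kvp, ma, pakr), ("kVp", "mA", "PAKR (mGy/min)")):
        ax.plot(pmma_cm, y, marker="o")
        ax.set_ylabel(label)
    axes[-1].set_xlabel("PMMA thickness (cm)")
    fig.suptitle("ADRIQ response vs. attenuator thickness (illustrative)")
    plt.show()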

  6. 75 FR 68200 - Medical Devices; Radiology Devices; Reclassification of Full-Field Digital Mammography System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ... exposure control, image processing and reconstruction programs, patient and equipment supports, component..., acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and... may include was revised by adding automatic exposure control, image processing and reconstruction...

  7. Stroboscopic Image Modulation to Reduce the Visual Blur of an Object Being Viewed by an Observer Experiencing Vibration

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)

    2014-01-01

    A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing its motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
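
    A minimal sketch of the underlying idea, under assumptions of our own (the sample rate, duty cycle, and use of the dominant vibration frequency are illustrative choices, not the patented method): derive an on/off drive signal from a sampled vibration waveform so the device is gated open only at a fixed phase of each vibration cycle.

    # Minimal sketch: derive a stroboscopic drive signal (SDS) from a sampled
    # vibration waveform by gating the SIM device "on" for a short window at a
    # fixed phase of the dominant vibration frequency.
    import numpy as np

    def sim_drive_signal(vibration, fs, duty=0.1):
        """Return a boolean on/off gate, one value per vibration sample."""
        spectrum = np.fft.rfft(vibration - np.mean(vibration))
        freqs = np.fft.rfftfreq(len(vibration), d=1.0 / fs)
        f0 = freqs[np.argmax(np.abs(spectrum[1:])) + 1]   # dominant vibration frequency
        phase = 2 * np.pi * f0 * np.arange(len(vibration)) / fs
        # Open the shutter/illuminator only during a small fraction of each cycle,
        # so the viewed image is effectively "frozen" at the same vibration phase.
        return (np.mod(phase, 2 * np.pi) / (2 * np.pi)) < duty

    fs = 2000.0                                  # Hz, sample rate (illustrative)
    t = np.arange(0, 1.0, 1.0 / fs)
    vib = np.sin(2 * np.pi * 18.0 * t)           # 18 Hz vibration (illustrative)
    gate = sim_drive_signal(vib, fs)
    print(f"on-fraction = {gate.mean():.2f}")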

  8. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E

    2005-06-21

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) captures the 2-D image formed by the video data input and includes a low-pass filter for noise filtering of the video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches the derived regions to a 3-D model of the boiler, derives the 3-D structure of the deposition on the pendant tubes, and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant's pendant tube cleaning and operating systems.
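
    The pre-processing and segmentation stages might be approximated as below (a sketch under our own assumptions; the filter width and thresholds are illustrative, not the patented algorithm):

    # Minimal sketch: low-pass noise filtering of an infrared boiler image followed
    # by a crude three-class segmentation into background, pendant tubes, and deposition.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def segment_boiler_image(ir_image, tube_level=0.4, deposit_level=0.7):
        smoothed = gaussian_filter(ir_image.astype(float), sigma=2.0)   # noise filtering
        norm = (smoothed - smoothed.min()) / (smoothed.ptp() + 1e-12)
        labels = np.zeros_like(norm, dtype=np.uint8)      # 0 = background
        labels[norm >= tube_level] = 1                    # 1 = pendant tube
        labels[norm >= deposit_level] = 2                 # 2 = deposition
        return labels

    frame = np.random.rand(480, 640)          # stand-in for a 4 or 8.7 micron IR frame
    print(np.bincount(segment_boiler_image(frame).ravel(), minlength=3))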

  9. Quality Control of Structural MRI Images Applied Using FreeSurfer—A Hands-On Workflow to Rate Motion Artifacts

    PubMed Central

    Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.

    2016-01-01

    In structural magnetic resonance imaging motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer). This can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary in order to ensure an acceptable level of quality and to define exclusion criteria of images (i.e., determine participants with most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528

  10. PScan 1.0: flexible software framework for polygon based multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Li, Yongxiao; Lee, Woei Ming

    2016-12-01

    Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy offers significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, several flexible software platforms have catered to custom-built microscopy systems (e.g., ScanImage, HelioScan, MicroManager), which perform at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high-speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB image processing toolbox. It can communicate directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high-speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and is sufficiently flexible for users to adapt it to their high-speed imaging systems.

  11. Identification Of Cells With A Compact Microscope Imaging System With Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2006-01-01

    A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
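
    The autofocus capability mentioned above can be illustrated with a generic focus-metric search (a sketch under our own assumptions, not the CMIS implementation; the acquire_at callable is hypothetical):

    # Minimal sketch: autofocus by stepping the stage through candidate z positions
    # and keeping the position whose image maximizes a simple sharpness metric
    # (variance of the Laplacian).
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def sharpness(image):
        return float(np.var(laplace(image.astype(float))))

    def autofocus(acquire_at, z_positions):
        """acquire_at(z) -> 2-D image; both arguments are hypothetical stand-ins."""
        scores = {z: sharpness(acquire_at(z)) for z in z_positions}
        return max(scores, key=scores.get), scores

    # Example with a synthetic "camera": images get blurrier away from z = 0.
    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))
    best_z, _ = autofocus(lambda z: gaussian_filter(scene, sigma=abs(z) + 0.1),
                          z_positions=np.linspace(-5, 5, 21))
    print(f"best focus near z = {best_z:.2f}")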

  12. Tracking of Cells with a Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2007-01-01

    A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously

  13. Tracking of cells with a compact microscope imaging system with intelligent controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2007-01-01

    A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to auto-focus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.

  14. Enhancement of tracking performance in electro-optical system based on servo control algorithm

    NASA Astrophysics Data System (ADS)

    Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu

    2017-10-01

    Modern electro-optical surveillance and reconnaissance systems require tracking capability to obtain exact images of a target or to accurately direct the line of sight to a target that is moving or stationary. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function to minimize overshoot in the tracking motion without missing the target. The scheme is to limit acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques by creating a system model of a DIRCM, simulating the same environment, and validating the performance on the actual equipment.
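
    The scheme of limiting acceleration and velocity in the tracking controller can be sketched as follows (the limits, time step, and idealized plant are assumptions for illustration):

    # Minimal sketch: shape the tracking command by clamping its velocity and
    # acceleration before it reaches the servo, which suppresses overshoot when
    # the image-based target position jumps.
    def limited_command(target, pos, vel, dt, v_max, a_max):
        desired_vel = (target - pos) / dt
        desired_vel = max(-v_max, min(v_max, desired_vel))        # velocity limit
        accel = (desired_vel - vel) / dt
        accel = max(-a_max, min(a_max, accel))                    # acceleration limit
        vel = vel + accel * dt
        return pos + vel * dt, vel

    pos, vel = 0.0, 0.0
    for step in range(50):                      # target suddenly jumps to 10 degrees
        pos, vel = limited_command(10.0, pos, vel, dt=0.01, v_max=30.0, a_max=100.0)
    print(f"position after 0.5 s: {pos:.2f} deg")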

  15. Non-destructive Faraday imaging of dynamically controlled ultracold atoms

    NASA Astrophysics Data System (ADS)

    Gajdacz, Miroslav; Pedersen, Poul; Mørch, Troels; Hilliard, Andrew; Arlt, Jan; Sherson, Jacob

    2013-05-01

    We investigate non-destructive measurements of ultra-cold atomic clouds based on dark field imaging of spatially resolved Faraday rotation. In particular, we pursue applications to dynamically controlled ultracold atoms. The dependence of the Faraday signal on laser detuning, atomic density and temperature is characterized in a detailed comparison with theory. In particular the destructivity per measurement is extremely low and we illustrate this by imaging the same cloud up to 2000 times. The technique is applied to avoid the effect of shot-to-shot fluctuations in atom number calibration. Adding dynamic changes to system parameters, we demonstrate single-run vector magnetic field imaging and single-run spatial imaging of the system's dynamic behavior. The method can be implemented particularly easily in standard imaging systems by the insertion of an extra polarizing beam splitter. These results are steps towards quantum state engineering using feedback control of ultracold atoms.

  16. Deployment of a Fully-Automated Green Fluorescent Protein Imaging System in a High Arctic Autonomous Greenhouse

    PubMed Central

    Abboud, Talal; Bamsey, Matthew; Paul, Anna-Lisa; Graham, Thomas; Braham, Stephen; Noumeir, Rita; Berinstain, Alain; Ferl, Robert

    2013-01-01

    Higher plants are an integral part of strategies for sustained human presence in space. Space-based greenhouses have the potential to provide closed-loop recycling of oxygen, water and food. Plant monitoring systems with the capacity to remotely observe the condition of crops in real-time within these systems would permit operators to take immediate action to ensure optimum system yield and reliability. One such plant health monitoring technique involves the use of reporter genes driving fluorescent proteins as biological sensors of plant stress. In 2006 an initial prototype green fluorescent protein imager system was deployed at the Arthur Clarke Mars Greenhouse located in the Canadian High Arctic. This prototype demonstrated the advantages of this biosensor technology and underscored the challenges in collecting and managing telemetric data from exigent environments. We present here the design and deployment of a second prototype imaging system deployed within and connected to the infrastructure of the Arthur Clarke Mars Greenhouse. This is the first imager to run autonomously for one year in the un-crewed greenhouse with command and control conducted through the greenhouse satellite control system. Images were saved locally in high resolution and sent telemetrically in low resolution. Imager hardware is described, including the custom designed LED growth light and fluorescent excitation light boards, filters, data acquisition and control system, and basic sensing and environmental control. Several critical lessons learned related to the hardware of small plant growth payloads are also elaborated. PMID:23486220

  17. Use of a gesture user interface as a touchless image navigation system in dental surgery: Case series report

    PubMed Central

    Elizondo, María L.

    2014-01-01

    Purpose The purposes of this study were to develop a workstation computer that allowed intraoperative touchless control of diagnostic and surgical images by a surgeon, and to report the preliminary experience with the use of the system in a series of cases in which dental surgery was performed. Materials and Methods A custom workstation with a new motion sensing input device (Leap Motion) was set up in order to use a natural user interface (NUI) to manipulate the imaging software by hand gestures. The system allowed intraoperative touchless control of the surgical images. Results For the first time in the literature, an NUI system was used for a pilot study during 11 dental surgery procedures including tooth extractions, dental implant placements, and guided bone regeneration. No complications were reported. The system performed very well and was very useful. Conclusion The proposed system fulfilled the objective of providing touchless access and control of the system of images and a three-dimensional surgical plan, thus allowing the maintenance of sterile conditions. The interaction between surgical staff, under sterile conditions, and computer equipment has been a key issue. The solution with an NUI with touchless control of the images seems to be closer to an ideal. The cost of the sensor system is quite low; this could facilitate its incorporation into the practice of routine dental surgery. This technology has enormous potential in dental surgery and other healthcare specialties. PMID:24944966

  18. Teleradiology system using a magneto-optical disk and N-ISDN

    NASA Astrophysics Data System (ADS)

    Ban, Hideyuki; Osaki, Takanobu; Matsuo, Hitoshi; Okabe, Akifumi; Nakajima, Kotaro; Ohyama, Nagaaki

    1997-05-01

    We have developed a new teleradiology system that provides a fast response and secure data transmission while using N-ISDN communication and an ISC magneto-optical disk that is specialized for medical use. The system consists of PC-based terminals connected to an N-ISDN line and the ISC disk. The system uses two types of data: the control data needed for various operational functions and the image data. For quick response, only the much smaller quantity of control data is sent through the N-ISDN during the actual conference. The bulk of the image data is sent to each site on duplicate ISC disks before the conference. The displaying and processing of images are executed using the local data on the ISC disk. We used this system for a trial teleconsultation between two hospitals. The response time needed to display a 2-Mbyte image was 4 seconds. The telepointer could be controlled with no noticeable delay by sending only the pointer's coordinates. Also, since the patient images were exchanged via the ISC disks only, unauthorized access to the patient images through the N-ISDN was prevented. Thus, this trial provides a preliminary demonstration of the usefulness of this system for clinical use.
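
    The control-data idea, that only small messages such as telepointer coordinates cross the narrowband link while the bulk image data stays on the local ISC disk, can be sketched as follows (addresses and message format are assumptions):

    # Minimal sketch: during the conference only small control messages such as
    # telepointer coordinates cross the narrowband link; each site reads the bulk
    # image data from its own local disk copy.
    import json
    import socket

    def send_pointer(sock, addr, x, y, image_id):
        msg = json.dumps({"type": "pointer", "image": image_id, "x": x, "y": y})
        sock.sendto(msg.encode("utf-8"), addr)       # a few dozen bytes per update

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pointer(sock, ("192.0.2.10", 5005), x=512, y=384, image_id="CT_0042")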

  19. Automated Steering Control Design by Visual Feedback Approach —System Identification and Control Experiments with a Radio-Controlled Car—

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yukihiro; Yoshii, Masakazu; Arai, Yasuhito; Adachi, Shuichi

    An advanced safety vehicle (ASV) assists the driver's manipulation to avoid traffic accidents. A variety of research on automatic driving systems is necessary as an element of ASV. Among these, we focus on a visual feedback approach in which the automatic driving system is realized by recognizing the road trajectory using image information. The purpose of this paper is to examine the validity of this approach by experiments using a radio-controlled car. First, a practical image processing algorithm to recognize white lines on the road is proposed. Second, a model of the radio-controlled car is built by system identification experiments. Third, an automatic steering control system is designed based on H∞ control theory. Finally, the effectiveness of the designed control system is examined via traveling experiments.
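
    A practical white-line recognition step of the kind described could look like the following sketch (OpenCV-based, with illustrative parameters; not the authors' algorithm):

    # Minimal sketch: detect candidate white lane lines by thresholding bright
    # pixels and fitting line segments with a probabilistic Hough transform.
    import cv2

    def detect_white_lines(bgr_frame):
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # keep near-white pixels
        edges = cv2.Canny(bright, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=3.1416 / 180,
                                threshold=40, minLineLength=30, maxLineGap=10)
        return [] if lines is None else [tuple(l[0]) for l in lines]   # (x1, y1, x2, y2)

    frame = cv2.imread("road.png")              # hypothetical camera frame
    if frame is not None:
        print(detect_white_lines(frame)[:5])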

  20. Guidance for Efficient Small Animal Imaging Quality Control.

    PubMed

    Osborne, Dustin R; Kuntner, Claudia; Berr, Stuart; Stout, David

    2017-08-01

    Routine quality control is a critical aspect of properly maintaining high-performance small animal imaging instrumentation. A robust quality control program helps produce more reliable data both for academic purposes and as proof of system performance for contract imaging work. For preclinical imaging laboratories, the combination of costs and available resources often limits their ability to produce efficient and effective quality control programs. This work presents a series of simplified quality control procedures that are accessible to a wide range of preclinical imaging laboratories. Our intent is to provide minimum guidelines for routine quality control that can assist preclinical imaging specialists in setting up an appropriate quality control program for their facility.

  1. Influence of grid control and object detection on radiation exposure and image quality using mobile C-arms - first results.

    PubMed

    Gosch, D; Ratzmer, A; Berauer, P; Kahn, T

    2007-09-01

    The objective of this study was to examine the extent to which the image quality on mobile C-arms can be improved by an innovative exposure rate control system (grid control). In addition, the possible dose reduction in the pulsed fluoroscopy mode using 25 pulses/sec produced by automatic adjustment of the pulse rate through motion detection was to be determined. As opposed to conventional exposure rate control systems, which use a measuring circle in the center of the field of view, grid control is based on a fine mesh of square cells which are overlaid on the entire fluoroscopic image. The system uses only those cells for exposure control that are covered by the object to be visualized. This is intended to ensure optimally exposed images, regardless of the size, shape and position of the object to be visualized. The system also automatically detects any motion of the object. If a pulse rate of 25 pulses/sec is selected and no changes in the image are observed, the pulse rate used for pulsed fluoroscopy is gradually reduced. This may decrease the radiation exposure. The influence of grid control on image quality was examined using an anthropomorphic phantom. The dose reduction achieved with the help of object detection was determined by evaluating the examination data of 146 patients from 5 different countries. The image of the static phantom made with grid control was always optimally exposed, regardless of the position of the object to be visualized. The average dose reduction when using 25 pulses/sec resulting from object detection and automatic down-pulsing was 21 %, and the maximum dose reduction was 60 %. Grid control facilitates C-arm operation, since optimum image exposure can be obtained independently of object positioning. Object detection may lead to a reduction in radiation exposure for the patient and operating staff.
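
    The grid-control and down-pulsing behaviour described above can be sketched as follows (the cell size, thresholds, and the notion of "covered" cells are assumptions for illustration, not the vendor's algorithm):

    # Minimal sketch: exposure feedback from only those grid cells covered by the
    # object, plus automatic down-pulsing when consecutive frames show no
    # significant change.
    import numpy as np

    def cell_means(img, cell=16):
        h, w = img.shape
        cropped = img[:h - h % cell, :w - w % cell]
        return cropped.reshape(h // cell, cell, -1, cell).mean(axis=(1, 3))

    def exposure_error(img, target=0.5, object_threshold=0.15):
        cells = cell_means(img)
        covered = cells < (1.0 - object_threshold)       # cells attenuated by the object
        if not covered.any():
            covered = np.ones_like(covered, dtype=bool)
        return target - cells[covered].mean()            # > 0 means increase dose rate

    def next_pulse_rate(prev_frame, frame, rate, min_rate=3, max_rate=25, change_threshold=0.02):
        changed = np.mean(np.abs(frame - prev_frame)) > change_threshold
        return max_rate if changed else max(min_rate, rate - 1)   # gradually reduce pulses/s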

  2. An active-optics image-motion compensation technology application for high-speed searching and infrared detection system

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Lu, Fei; Zou, Kai; Yan, Hong; Wan, Min; Kuang, Yan; Zhou, Yanqing

    2018-03-01

    A high-precision, small-aperture stabilization control technology for active-optics image-motion compensation at ultra-high angular velocity is put forward in this paper. The image blur caused by high-velocity relative motion of several hundred degrees per second between the imaging system and the target is theoretically analyzed. A velocity-matching model of the detection system and the active-optics compensation system is built, and the experiment parameters of the active-optics image-motion compensation platform are designed. High-velocity, high-precision optical compensation control at several hundred degrees per second is studied and implemented. The relative motion velocity is up to 250°/s, and the image motion amplitude is more than 20 pixels. After the active-optics compensation, the motion blur is less than one pixel. The bottleneck of ultra-high angular velocity combined with long exposure time in searching and infrared detection systems is thereby successfully overcome.

  3. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created for which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise, and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  4. 77 FR 59941 - Prospective Grant of Exclusive License: Terahertz Scanning Systems for Cancer Pathology

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-01

    ... Computer-Controlled Adaptive Near Field Imaging of Biological Systems'' Patent application No. Territory... licensure describe and claim a terahertz (THz) imaging system that may overcome the limitations of existing.... Additionally, the THz imaging system describes a sensor head geometry that eliminates the requirement to...

  5. Overcoming Dynamic Disturbances in Imaging Systems

    NASA Technical Reports Server (NTRS)

    Young, Eric W.; Dente, Gregory C.; Lyon, Richard G.; Chesters, Dennis; Gong, Qian

    2000-01-01

    We develop and discuss a methodology with the potential to yield a significant reduction in complexity, cost, and risk of space-borne optical systems in the presence of dynamic disturbances. More robust systems almost certainly will be a result as well. Many future space-based and ground-based optical systems will employ optical control systems to enhance imaging performance. The goal of the optical control subsystem is to determine the wavefront aberrations and remove them. Ideally reducing an aberrated image of the object under investigation to a sufficiently clear (usually diffraction-limited) image. Control will likely be distributed over several elements. These elements may include telescope primary segments, telescope secondary, telescope tertiary, deformable mirror(s), fine steering mirror(s), etc. The last two elements, in particular, may have to provide dynamic control. These control subsystems may become elaborate indeed. But robust system performance will require evaluation of the image quality over a substantial range and in a dynamic environment. Candidate systems for improvement in the Earth Sciences Enterprise could include next generation Landsat systems or atmospheric sensors for dynamic imaging of individual, severe storms. The technology developed here could have a substantial impact on the development of new systems in the Space Science Enterprise; such as the Next Generation Space Telescope(NGST) and its follow-on the Next NGST. Large Interferometric Systems of non-zero field, such as Planet Finder and Submillimeter Probe of the Evolution of Cosmic Structure, could benefit. These systems most likely will contain large, flexible optomechanical structures subject to dynamic disturbance. Furthermore, large systems for high resolution imaging of planets or the sun from space may also benefit. Tactical and Strategic Defense systems will need to image very small targets as well and could benefit from the technology developed here. We discuss a novel speckle imaging technique with the potential to separate dynamic aberrations from static aberrations. Post-processing of a set of image data, using an algorithm based on this technique, should work for all but the lowest light levels and highest frequency dynamic environments. This technique may serve to reduce the complexity of the control system and provide for robust, fault-tolerant, reduced risk operation. For a given object, a short exposure image is "frozen" on the focal plane in the presence of the environmental disturbance (turbulence, jitter, etc.). A key factor is that this imaging data exhibits frame-to-frame linear shift invariance. Therefore, although the Point Spread Function is varying from frame to frame, the source is fixed; and each short exposure contains object spectrum data out to the diffraction limit of the imaging system. This novel speckle imaging technique uses the Knox-Thompson method. The magnitude of the complex object spectrum is straightforward to determine by well-established approaches. The phase of the complex object spectrum is decomposed into two parts. One is a single-valued function determined by the divergence of the optical phase gradient. The other is a multi-valued function determined by the circulation of the optical phase gradient-"hidden phase." Finite difference equations are developed for the phase. The novelty of this approach is captured in the inclusion of this "hidden phase." 
This technique allows the diffraction-limited reconstruction of the object from the ensemble of short exposure frames while simultaneously estimating the phase as a function of time from a set of exposures.
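
    The phase decomposition described above can be summarized as follows (our notation, not necessarily the authors' formulation), writing \mathbf{g} for the estimated gradient of the object-spectrum phase:

    \[
      \phi(\mathbf{u}) = \phi_{s}(\mathbf{u}) + \phi_{h}(\mathbf{u}), \qquad
      \nabla^{2}\phi_{s} = \nabla\cdot\mathbf{g} \quad \text{(single-valued part from the divergence)},
    \]
    \[
      \oint_{C} \nabla\phi_{h}\cdot d\boldsymbol{\ell} = \oint_{C} \mathbf{g}\cdot d\boldsymbol{\ell} = 2\pi m,\ m\in\mathbb{Z} \quad \text{(multi-valued ``hidden phase'' from the circulation)}.
    \]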

  7. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  8. Nonlinear research of an image motion stabilization system embedded in a space land-survey telescope

    NASA Astrophysics Data System (ADS)

    Somov, Yevgeny; Butyrin, Sergey; Siguerdidjane, Houria

    2017-01-01

    We consider an image motion stabilization system embedded into a space telescope for scanning optoelectronic observation of terrestrial targets. A model of this system is presented that takes into account the physical hysteresis of the piezo-ceramic driver and a time delay in forming the digital control. We present algorithms for discrete filtering and digital control, results of an analysis of the image motion velocity oscillations in the telescope focal plane, and methods for terrestrial and in-flight verification of the system.

  9. Robust adaptive optics systems for vision science

    NASA Astrophysics Data System (ADS)

    Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.

    2018-02-01

    Adaptive Optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and eyes with optical changes to the anterior eye can be difficult and requires a control and an imaging system that is resilient when there is scattering and occlusion from the cornea and lens, as well as in the presence of irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality of the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror and a MEMS mirror with a single Shack-Hartmann sensor. The SH sensor samples an 8 mm exit pupil, and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with smaller irregular pupils; however, because the depth of field increases under these conditions, sectioning performance decreases. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image has poorer image quality that includes both photoreceptors and blood vessels.
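
    The per-lenslet quality screening described above can be sketched as follows (the metric, threshold, and stand-in data are assumptions, not the authors' implementation):

    # Minimal sketch: score each Shack-Hartmann lenslet spot and exclude
    # low-quality spots from the wavefront slope estimate.
    import numpy as np

    def spot_quality(subimage):
        """Peak-to-mean ratio as a crude spot-quality metric."""
        return float(subimage.max() / (subimage.mean() + 1e-12))

    def valid_slopes(subimages, slopes, quality_threshold=3.0):
        ok = np.array([spot_quality(s) > quality_threshold for s in subimages])
        return slopes[ok], ok          # only well-formed spots feed the reconstructor

    rng = np.random.default_rng(1)
    subimages = []
    for i in range(64):                                   # stand-in lenslet sub-images
        img = rng.random((16, 16))
        if i < 50:
            img[8, 8] = 20.0                              # a clear spot for most lenslets
        subimages.append(img)
    slopes = rng.normal(size=(64, 2))                     # stand-in (x, y) slopes
    good_slopes, mask = valid_slopes(subimages, slopes)
    print(f"{mask.sum()} of {mask.size} lenslets kept")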

  10. Calibration of Viking imaging system pointing, image extraction, and optical navigation measure

    NASA Technical Reports Server (NTRS)

    Breckenridge, W. G.; Fowler, J. W.; Morgan, E. M.

    1977-01-01

    Pointing control and knowledge accuracy of Viking Orbiter science instruments is controlled by the scan platform. Calibration of the scan platform and the imaging system was accomplished through mathematical models. The calibration procedure and results obtained for the two Viking spacecraft are described. Included are both ground and in-flight scan platform calibrations, and the additional calibrations unique to optical navigation.

  11. High-speed laser photoacoustic imaging system combined with a digital ultrasonic imaging platform

    NASA Astrophysics Data System (ADS)

    Zeng, Lvming; Liu, Guodong; Ji, Xuanrong; Ren, Zhong; Huang, Zhen

    2009-07-01

    In the emerging field of combined ultrasound/photoacoustic imaging in biomedical photonics research, we present and demonstrate a high-speed laser photoacoustic imaging system combined with a digital ultrasound imaging platform. In the prototype system, a new B-mode digital ultrasonic imaging system is modified as the hardware platform, with 384 vertical transducer elements. The centre resonance frequency of the piezoelectric transducer is 5.0 MHz with greater than 70% pulse-echo -6 dB fractional bandwidth. A PCI-6541 modular instrument is used as the hardware control centre of the testing system; it features 32 high-speed channels for building a low-skew, multi-channel system. The digital photoacoustic data are transferred into the computer for subsequent reconstruction at a 25 MHz clock frequency. Meanwhile, the software system for control and analysis is implemented in LabVIEW on a virtual instrument platform. In the breast tissue experiment, the reconstructed image agrees well with the original sample, and the spatial resolution of the system reaches 0.2 mm with a multi-element synthetic aperture focusing technique. Therefore, the system and method may have significant value in improving early detection of cancer in the breast and other organs.

  12. Development of a software based automatic exposure control system for use in image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Morton, Daniel R.

    Modern image guided radiation therapy involves the use of an isocentrically mounted imaging system to take radiographs of a patient's position before the start of each treatment. Image guidance helps to minimize errors associated with a patient's setup, but the radiation dose received by patients from imaging must be managed to ensure no additional risks. The Varian On-Board Imager (OBI) (Varian Medical Systems, Inc., Palo Alto, CA) does not have an automatic exposure control system and therefore requires exposure factors to be manually selected. Without patient specific exposure factors, images may become saturated and require multiple unnecessary exposures. A software based automatic exposure control system has been developed to predict optimal, patient specific exposure factors. The OBI system was modelled in terms of the x-ray tube output and detector response in order to calculate the level of detector saturation for any exposure situation. Digitally reconstructed radiographs are produced via ray-tracing through the patients' volumetric datasets that are acquired for treatment planning. The ray-trace determines the attenuation of the patient and subsequent x-ray spectra incident on the imaging detector. The resulting spectra are used in the detector response model to determine the exposure levels required to minimize detector saturation. Images calculated for various phantoms showed good agreement with the images that were acquired on the OBI. Overall, regions of detector saturation were accurately predicted, and the detector response for non-saturated regions in images of an anthropomorphic phantom was calculated to generally be within 5 to 10% of the measured values. Calculations were performed on patient data and found results similar to those for the phantom images, with the calculated images being able to determine detector saturation in close agreement with images that were acquired during treatment. Overall, it was shown that the system model and calculation method could potentially be used to predict patients' exposure factors before their treatment begins, thus preventing the need for multiple exposures.
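
    The core prediction step can be sketched as follows (a single effective ray, a linear detector model, and all constants are assumptions for illustration, not the developed software):

    # Minimal sketch: estimate transmission along a ray through the patient with
    # Beer-Lambert attenuation, then choose an exposure (mAs) that keeps the
    # predicted detector signal below saturation.
    import numpy as np

    def transmission(mu_voxels, step_cm):
        """Beer-Lambert transmission along one ray through linear attenuation coefficients."""
        return float(np.exp(-np.sum(mu_voxels) * step_cm))

    def choose_mAs(mu_voxels, step_cm, gain=5.0e4, saturation=4095, headroom=0.8):
        t = transmission(mu_voxels, step_cm)
        signal_per_mAs = gain * t                    # detector counts per mAs (assumed linear)
        return headroom * saturation / signal_per_mAs

    ray_mu = np.full(200, 0.02)                      # ~soft tissue, 0.1 cm steps -> 20 cm path
    print(f"transmission = {transmission(ray_mu, 0.1):.3f}, "
          f"suggested exposure = {choose_mAs(ray_mu, 0.1):.2f} mAs")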

  13. Enhanced optical design by distortion control

    NASA Astrophysics Data System (ADS)

    Thibault, Simon; Gauvin, Jonny; Doucet, Michel; Wang, Min

    2005-09-01

    The control of optical distortion is useful for the design of a variety of optical system. The most popular is the F-theta lens used in laser scanning system to produce a constant scan velocity across the image plane. Many authors have designed during the last 20 years distortion control corrector. Today, many challenging digital imaging system can use distortion the enhanced their imaging capability. A well know example is a reversed telephoto type, if the barrel distortion is increased instead of being corrected; the result is a so-called Fish-eye lens. However, if we control the barrel distortion instead of only increasing it, the resulting system can have enhanced imaging capability. This paper will present some lens design and real system examples that clearly demonstrate how the distortion control can improve the system performances such as resolution. We present innovative optical system which increases the resolution in the field of view of interest to meet the needs of specific applications. One critical issue when we designed using distortion is the optimization management. Like most challenging lens design, the automatic optimization is less reliable. Proper management keeps the lens design within the correct range, which is critical for optimal performance (size, cost, manufacturability). Many lens design presented tailor a custom merit function and approach.
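
    As a reminder of the standard scan-lens relation behind the F-theta example (textbook background, not specific to the designs presented): an ideal F-theta lens maps field angle linearly to image height, so a constant angular scan rate gives a constant spot velocity across the image plane, whereas a distortion-free rectilinear lens follows the tangent mapping:

    \[
      h_{f\theta}(\theta) = f\,\theta, \qquad h_{\mathrm{rectilinear}}(\theta) = f\tan\theta, \qquad \frac{dh_{f\theta}}{dt} = f\,\dot{\theta}.
    \]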

  14. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for driver to check traffic conditions such as traffic lights, or traffic signs as early as soon. If on-vehicle camera takes image of important objects to understand traffic conditions from long distance and shows these to driver, driver can understand traffic conditions earlier. To take image of long distance objects clearly, the focal length of camera must be long. When the focal length is long, on-vehicle camera doesn't have enough field of view to check traffic conditions. Therefore, in order to get necessary images from long distance, camera must have long-focal length and controllability of shooting direction. In previous study, driver indicates shooting direction on displayed image taken by a wide-angle camera, a direction controllable camera takes telescopic image, and displays these to driver. However, driver uses a touch panel to indicate the shooting direction in previous study. It is cause of disturb driving. So, we propose a telephoto camera system for driving support whose shooting direction is controlled by driver's gaze to avoid disturbing drive. This proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt non-wear detecting method to avoid hindrance to drive. The gaze detector measures driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners and the direction can be switched within a few milliseconds. We confirmed that the proposed system takes images of gazing straight ahead of subject by experiments.

  15. Next Generation Parallelization Systems for Processing and Control of PDS Image Node Assets

    NASA Astrophysics Data System (ADS)

    Verma, R.

    2017-06-01

    We present next-generation parallelization tools to help Planetary Data System (PDS) Imaging Node (IMG) better monitor, process, and control changes to nearly 650 million file assets and over a dozen machines on which they are referenced or stored.

  16. Technology transfer: Imaging tracker to robotic controller

    NASA Technical Reports Server (NTRS)

    Otaguro, M. S.; Kesler, L. O.; Land, Ken; Erwin, Harry; Rhoades, Don

    1988-01-01

    The transformation of an imaging tracker to a robotic controller is described. A multimode tracker was developed for fire and forget missile systems. The tracker locks on to target images within an acquisition window using multiple image tracking algorithms to provide guidance commands to missile control systems. This basic tracker technology is used with the addition of a ranging algorithm based on sizing a cooperative target to perform autonomous guidance and control of a platform for an Advanced Development Project on automation and robotics. A ranging tracker is required to provide the positioning necessary for robotic control. A simple functional demonstration of the feasibility of this approach was performed and described. More realistic demonstrations are under way at NASA-JSC. In particular, this modified tracker, or robotic controller, will be used to autonomously guide the Man Maneuvering Unit (MMU) to targets such as disabled astronauts or tools as part of the EVA Retriever efforts. It will also be used to control the orbiter's Remote Manipulator Systems (RMS) in autonomous approach and positioning demonstrations. These efforts will also be discussed.
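
    The ranging-by-sizing idea reduces, under a pinhole-camera assumption with a cooperative target of known dimensions, to a similar-triangles relation (illustrative numbers):

    # Minimal sketch: estimate range to a cooperative target of known size from
    # its apparent size in the image (pinhole-camera assumption).
    def range_from_size(target_size_m, apparent_size_px, focal_length_px):
        """range = focal length * true size / apparent size (similar triangles)."""
        return focal_length_px * target_size_m / apparent_size_px

    # A 0.5 m target spanning 40 px with an 800 px focal length is ~10 m away.
    print(f"{range_from_size(0.5, 40, 800):.1f} m")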

  17. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.

  18. A Closed-Loop Proportional-Integral (PI) Control Software for Fully Mechanically Controlled Automated Electron Microscopic Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    REN, GANG; LIU, JINXIN; LI, HONGCHANG

    A closed-loop proportional-integral (PI) control software package is provided for fully mechanically controlled automated electron microscopic tomography. The software is developed based on Gatan DigitalMicrograph and is compatible with the Zeiss LIBRA 120 transmission electron microscope; however, it can be extended to other TEM instruments with modification. The software consists of a graphical user interface, a digital PI controller, an image analyzing unit, and other drive units (i.e., an image acquisition unit and a goniometer drive unit). During a tomography data collection process, the image analyzing unit analyzes both the accumulated shift and the defocus value of the latest acquired image, and provides the results to the digital PI controller. The digital PI controller compares the results with the preset values and determines the optimum adjustments of the goniometer. The goniometer drive unit adjusts the spatial position of the specimen according to the instructions given by the digital PI controller for the next tilt angle and image acquisition, and achieves high-precision positioning by using a backlash elimination method. The major benefits of the software are: 1) the goniometer drive unit keeps pre-aligned/optimized beam conditions unchanged and achieves position tracking solely through mechanical control; 2) the image analyzing unit relies only on historical data and therefore does not require additional images/exposures; 3) the PI controller enables the system to dynamically track the imaging target with extremely low system error.
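
    The discrete PI update at the heart of such a scheme can be sketched as follows (the gains, units, and idealized plant are assumptions, not the published software):

    # Minimal sketch: a discrete PI update that turns the measured image shift
    # error into the next goniometer correction.
    class DiscretePI:
        def __init__(self, kp, ki, dt):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.integral = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral   # correction to apply

    pi = DiscretePI(kp=0.6, ki=0.2, dt=1.0)           # one update per tilt step
    shift_nm = 120.0                                   # measured accumulated image shift
    for _ in range(5):
        correction = pi.update(setpoint=0.0, measurement=shift_nm)
        shift_nm += correction                         # idealized plant: shift moves by the correction
    print(f"residual shift after 5 tilt steps: {shift_nm:.1f} nm")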

  19. Application of QC_DR software for acceptance testing and routine quality control of direct digital radiography systems: initial experiences using the Italian Association of Physicist in Medicine quality control protocol.

    PubMed

    Nitrosi, Andrea; Bertolini, Marco; Borasi, Giovanni; Botti, Andrea; Barani, Adriana; Rivetti, Stefano; Pierotti, Luisa

    2009-12-01

    Ideally, medical x-ray imaging systems should be designed to deliver maximum image quality at an acceptable radiation risk to the patient. Quality assurance procedures are employed to ensure that these standards are maintained. A quality control protocol for direct digital radiography (DDR) systems is described and discussed. Software to automatically process and analyze the required images was developed. In this paper, the initial results obtained on equipment of different DDR manufacturers were reported. The protocol was developed to highlight even small discrepancies in standard operating performance.

  20. Medical Devices; Hematology and Pathology Devices; Classification of the Whole Slide Imaging System. Final order.

    PubMed

    2018-01-02

    The Food and Drug Administration (FDA or we) is classifying the whole slide imaging system into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the whole slide imaging system's classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.

  1. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

    [Figure: photographs show the fiber-optic light source; the microscope and charge-coupled device (CCD) camera head connected to the camera body; the CCD camera body feeding data to an image acquisition board in a PC; and a Cartesian robot controlled via a PC board.] The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for use in colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.

  2. Technical advances of interventional fluoroscopy and flat panel image receptor.

    PubMed

    Lin, Pei-Jan Paul

    2008-11-01

    In the past decade, various radiation reducing devices and control circuits have been implemented on fluoroscopic imaging equipment. Because of the potential for lengthy fluoroscopic procedures in interventional cardiovascular angiography, these devices and control circuits have been developed for the cardiac catheterization laboratories and interventional angiography suites. Additionally, fluoroscopic systems equipped with image intensifiers have benefited from technological advances in x-ray tube, x-ray generator, and spectral shaping filter technologies. The high heat capacity x-ray tube, the medium frequency inverter generator with high performance switching capability, and the patient dose reduction spectral shaping filter had already been implemented on the image intensified fluoroscopy systems. These three underlying technologies together with the automatic dose rate and image quality (ADRIQ) control logic allow patients undergoing cardiovascular angiography procedures to benefit from "lower patient dose" with "high image quality." While photoconductor (or phosphor plate) x-ray detectors and signal capture thin film transistor (TFT) and charge coupled device (CCD) arrays are analog in nature, the advent of the flat panel image receptor allowed for fluoroscopy procedures to become more streamlined. With the analog-to-digital converter built into the data lines, the flat panel image receptor appears to become a digital device. While the transition from image intensified fluoroscopy systems to flat panel image receptor fluoroscopy systems is part of the on-going "digitization of imaging," the value of a flat panel image receptor may have to be evaluated with respect to patient dose, image quality, and clinical application capabilities. The advantage of flat panel image receptors has yet to be fully explored. For instance, the flat panel image receptor has its disadvantages as compared to the image intensifiers; the cost of the equipment is probably the most obvious. On the other hand, due to its wide dynamic range and linearity, lowering of patient dose beyond current practice could be achieved through the calibration process of the flat panel input dose rate being set to, for example, one half or less of current values. In this article various radiation saving devices and control circuits are briefly described. This includes various types of fluoroscopic systems designed to strive for reduction of patient exposure with the application of spectral shaping filters. The main thrust is to understand the ADRIQ control logic, through equipment testing, as it relates to clinical applications, and to show how this ADRIQ control logic "ties" those three technological advancements together to provide low radiation dose to the patient with high quality fluoroscopic images. Finally, rotational angiography with computed tomography (CT) and three dimensional (3-D) images utilizing flat panel technology will be reviewed as they pertain to diagnostic imaging in cardiovascular disease.

  3. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  4. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

    PubMed

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

    2015-08-24

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.

  5. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.

  6. Linear-constraint wavefront control for exoplanet coronagraphic imaging systems

    NASA Astrophysics Data System (ADS)

    Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean

    2017-01-01

    A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets in a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are the two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently in the High Contrast Imaging Lab at Princeton University (HCIL), a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated using numerical simulation. Instead of only constraining the average contrast over the entire search area, the new controller constrains the electric field of each single pixel using linear programming, which could lead to significant increases in the speed of the wavefront correction and also create more uniform dark holes. As a follow-up of this work, another linear-constraint controller modified from EFC is demonstrated theoretically and numerically, and lab verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with EFC and stroke minimization.
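    The controllers named above are described only at a high level in this record. As a rough illustration of the electric field conjugation idea it mentions (minimizing the dark-hole field while regularizing actuator strength), the sketch below solves a Tikhonov-regularized least-squares problem for the DM command; the Jacobian G, field estimate E, and all names are illustrative assumptions, not the HCIL implementation.

```python
import numpy as np

def efc_step(G, E, alpha=1e-6):
    """One electric-field-conjugation-style correction step (sketch).

    G     : (2*Npix, Nact) real Jacobian mapping DM actuator commands to
            [Re, Im] of the focal-plane field change (assumed known from a model).
    E     : (Npix,) complex estimate of the current dark-hole field.
    alpha : Tikhonov regularization weight that limits actuator stroke.
    Returns the DM command update minimizing |E + G u|^2 + alpha |u|^2.
    """
    e = np.concatenate([E.real, E.imag])           # stack field into a real vector
    lhs = G.T @ G + alpha * np.eye(G.shape[1])     # regularized normal equations
    return -np.linalg.solve(lhs, G.T @ e)

# Toy usage with random numbers standing in for a real optical model.
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 30))                     # 100 pixels x (Re, Im), 30 actuators
E = (rng.normal(size=100) + 1j * rng.normal(size=100)) * 1e-4
u = efc_step(G, E)
print(u.shape)                                     # (30,) actuator command update
```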

  7. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    PubMed

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study aims to illustrate a multiparametric automatic method for monitoring long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. Variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for the reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) was proven to be effective, applicable on a large scale, and suitable for any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed comparing current index value with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
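    The record states that 15 image quality indices are compared against baseline values and that their coefficients of variation quantify reproducibility. A minimal sketch of that bookkeeping is shown below; the array shapes, tolerance, and function name are assumptions for illustration.

```python
import numpy as np

def iqi_reproducibility(weekly_iqis, baseline, tolerance=0.05):
    """Assess long-term reproducibility of image quality indices (sketch).

    weekly_iqis : (n_weeks, n_indices) array of weekly phantom IQI measurements.
    baseline    : (n_indices,) reference values from the baseline procedure.
    tolerance   : flag indices whose coefficient of variation exceeds this value.
    """
    mean = weekly_iqis.mean(axis=0)
    cov = weekly_iqis.std(axis=0, ddof=1) / mean          # coefficient of variation
    drift = (mean - baseline) / baseline                   # relative shift vs. baseline
    flagged = np.flatnonzero(cov > tolerance)
    return cov, drift, flagged

# Example: 80 weekly phantom images, 15 indices, ~3% noise around baseline.
rng = np.random.default_rng(1)
baseline = rng.uniform(0.5, 2.0, size=15)
weekly = baseline * (1 + 0.03 * rng.standard_normal((80, 15)))
cov, drift, flagged = iqi_reproducibility(weekly, baseline)
print(cov.round(3), flagged)
```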

  8. Tracking target objects orbiting earth using satellite-based telescopes

    DOEpatents

    De Vries, Willem H; Olivier, Scot S; Pertica, Alexander J

    2014-10-14

    A system for tracking objects that are in earth orbit via a constellation or network of satellites having imaging devices is provided. An object tracking system includes a ground controller and, for each satellite in the constellation, an onboard controller. The ground controller receives ephemeris information for a target object and directs that ephemeris information be transmitted to the satellites. Each onboard controller receives ephemeris information for a target object, collects images of the target object based on the expected location of the target object at an expected time, identifies actual locations of the target object from the collected images, and identifies a next expected location at a next expected time based on the identified actual locations of the target object. The onboard controller processes the collected image to identify the actual location of the target object and transmits the actual location information to the ground controller.

  9. A single FPGA-based portable ultrasound imaging system for point-of-care applications.

    PubMed

    Kim, Gi-Duck; Yoon, Changhan; Kye, Sang-Bum; Lee, Youngbae; Kang, Jeeun; Yoo, Yangmo; Song, Tai-kyong

    2012-07-01

    We present a cost-effective portable ultrasound system based on a single field-programmable gate array (FPGA) for point-of-care applications. In the portable ultrasound system developed, all the ultrasound signal and image processing modules, including an effective 32-channel receive beamformer with pseudo-dynamic focusing, are embedded in an FPGA chip. For overall system control, a mobile processor running Linux at 667 MHz is used. The scan-converted ultrasound image data from the FPGA are directly transferred to the system controller via external direct memory access without a video processing unit. The portable ultrasound system developed can provide real-time B-mode imaging at a maximum frame rate of 30 frames per second, and it has a battery life of approximately 1.5 h. These results indicate that the single-FPGA-based portable ultrasound system developed is able to meet the processing requirements of medical ultrasound imaging while providing improved flexibility for adapting to emerging point-of-care (POC) applications.
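    The 32-channel receive beamformer mentioned above is implemented in fixed-point FPGA logic; as a software illustration of the underlying delay-and-sum idea, here is a floating-point sketch for a single scan line, with the array geometry, sampling rate, and function names chosen as assumptions.

```python
import numpy as np

def das_beamform(rf, elem_x, fs, c, depths, scanline_x=0.0):
    """Delay-and-sum receive beamforming along one scan line (sketch).

    rf       : (n_elements, n_samples) received RF channel data.
    elem_x   : (n_elements,) lateral element positions in meters.
    fs       : sampling frequency in Hz.
    c        : speed of sound in m/s.
    depths   : (n_points,) image depths in meters along the scan line.
    """
    n_elem, n_samp = rf.shape
    out = np.zeros(len(depths))
    for i, z in enumerate(depths):
        # round-trip time: transmit to depth z plus return path to each element
        rx_dist = np.sqrt((elem_x - scanline_x) ** 2 + z ** 2)
        delays = (z + rx_dist) / c
        idx = np.round(delays * fs).astype(int)
        valid = idx < n_samp
        out[i] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return out

# Toy usage with synthetic data for a 32-element array.
fs, c = 40e6, 1540.0
elem_x = (np.arange(32) - 15.5) * 0.3e-3
rf = np.random.default_rng(2).standard_normal((32, 4096))
line = das_beamform(rf, elem_x, fs, c, depths=np.linspace(5e-3, 60e-3, 200))
print(line.shape)
```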

  10. Testing the quality of images for permanent magnet desktop MRI systems using specially designed phantoms.

    PubMed

    Qiu, Jianfeng; Wang, Guozhu; Min, Jiao; Wang, Xiaoyan; Wang, Pengcheng

    2013-12-21

    Our aim was to measure the performance of desktop magnetic resonance imaging (MRI) systems using specially designed phantoms, by testing imaging parameters and analysing the imaging quality. We designed multifunction phantoms with diameters of 18 and 60 mm for desktop MRI scanners in accordance with the American Association of Physicists in Medicine (AAPM) report no. 28. We scanned the phantoms with three permanent magnet 0.5 T desktop MRI systems, measured the MRI image parameters, and analysed imaging quality by comparing the data with the AAPM criteria and Chinese national standards. Image parameters included: resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, signal-to-noise ratio (SNR), and image uniformity. The image parameters of the three desktop MRI machines could be measured using our specially designed phantoms, and most parameters were in line with the MRI quality control criteria, including resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, image uniformity and slice position accuracy. However, SNR was significantly lower than in some references. Imaging tests and quality control are necessary for desktop MRI systems, and should be performed with an applicable phantom and the corresponding standards.
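    Among the parameters listed, SNR and image uniformity lend themselves to simple region-of-interest calculations. The sketch below uses common simplified definitions (mean of a signal ROI divided by the standard deviation of a background ROI, and percent integral uniformity); the exact ROI placement and formulas prescribed by AAPM report no. 28 and the national standards may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def phantom_snr(image, signal_roi, noise_roi):
    """SNR from a uniform-signal ROI and a background (air) ROI (sketch).

    signal_roi, noise_roi : (row_slice, col_slice) tuples selecting regions.
    """
    signal = image[signal_roi].mean()
    noise = image[noise_roi].std(ddof=1)
    return signal / noise

def integral_uniformity(image, roi):
    """Percent integral uniformity over a uniform-phantom ROI (sketch)."""
    region = image[roi].astype(float)
    smax, smin = region.max(), region.min()
    return 100.0 * (1.0 - (smax - smin) / (smax + smin))

# Toy image: uniform phantom region of value ~1000 with noise, air background.
rng = np.random.default_rng(4)
img = 5 + 3 * rng.standard_normal((128, 128))
img[32:96, 32:96] = 1000 + 20 * rng.standard_normal((64, 64))
print(phantom_snr(img, (slice(48, 80), slice(48, 80)), (slice(0, 16), slice(0, 16))))
print(integral_uniformity(img, (slice(48, 80), slice(48, 80))))
```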

  11. Wavefront sensorless adaptive optics ophthalmoscopy in the human eye

    PubMed Central

    Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason

    2011-01-01

    Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
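    The record describes a stochastic parallel gradient descent (SPGD) loop that directly optimizes mean retinal-image intensity without a wavefront sensor. A generic SPGD sketch against a user-supplied metric is shown below; the dither amplitude, gain, and the toy quadratic metric are illustrative assumptions rather than the AOSLO parameters.

```python
import numpy as np

def spgd_optimize(measure_metric, n_act, n_iter=300, perturb=0.05, gain=4.0, seed=0):
    """Stochastic parallel gradient descent for sensorless AO (sketch).

    measure_metric : callable taking a DM command vector and returning the
                     image-quality metric (e.g., mean frame intensity).
    n_act          : number of deformable-mirror actuators.
    """
    rng = np.random.default_rng(seed)
    u = np.zeros(n_act)
    for _ in range(n_iter):
        delta = perturb * rng.choice([-1.0, 1.0], size=n_act)   # parallel +/- dither
        dJ = measure_metric(u + delta) - measure_metric(u - delta)
        u += gain * dJ * delta                                   # stochastic gradient step
    return u

# Toy metric: a quadratic with a known optimum stands in for image intensity;
# the gain above is tuned for this toy problem, not for a real AO loop.
optimum = np.linspace(-0.5, 0.5, 12)
metric = lambda u: 1.0 - np.sum((u - optimum) ** 2)
u_best = spgd_optimize(metric, n_act=12)
print(np.round(u_best - optimum, 2))   # residual error, should be close to zero
```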

  12. Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system

    NASA Astrophysics Data System (ADS)

    Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong

    2011-06-01

    Ladar system simulation uses computer models of the ladar to predict the performance of the ladar system. This paper reviews developments in laser imaging radar simulation, both domestic and international, and studies of computer simulation of ladar systems for different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research in imaging ladar simulation has so far been limited in scale and non-unified in design, mostly achieving simple functional simulation based on ranging equations. A laser imaging radar simulation with an open, modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings and system controller. A unified Matlab toolbox and standard control modules have been built with regulated function inputs and outputs, and with communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle was performed with the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox can simulate the actual detection process accurately. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection, the open structure allows the toolbox to be modified for specialized requirements, and the modularization gives simulations flexibility.

  13. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems, so a real-time image acquisition and display design based on a CCD device is of great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges and problems of the system are also analysed and solutions put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination and correlated double sampling (CDS). The AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was designed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and that its performance meets the actual project requirements.

  14. Medical devices; radiology devices; reclassification of full-field digital mammography system. Final rule.

    PubMed

    2010-11-05

    The Food and Drug Administration (FDA) is announcing the reclassification of the full-field digital mammography (FFDM) system from class III (premarket approval) to class II (special controls). The device type is intended to produce planar digital x-ray images of the entire breast; this generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component parts, and accessories. The special control that will apply to the device is the guidance document entitled "Class II Special Controls Guidance Document: Full-Field Digital Mammography System." FDA is reclassifying the device into class II (special controls) because general controls along with special controls will provide a reasonable assurance of safety and effectiveness of the device. Elsewhere in this issue of the Federal Register, FDA is announcing the availability of the guidance document that will serve as the special control for this device.

  15. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    NASA Astrophysics Data System (ADS)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow for early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage or electric heaters. Because the infrared cage and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate under the low luminous density of the test. Moreover, some instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To improve the ability to closely monitor the spacecraft and to document test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified ICCD camera, an assistant luminance system, a glare protection system, a thermal control system and a computer control system. Multi-frame accumulation target detection is adopted for high-quality image recognition in the captive test. The optical, mechanical and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate in a vacuum thermal environment of 1.33×10-3 Pa and 100 K shroud temperature in the space environment simulator, with its working temperature maintained at 5 °C during the two-day test. The night vision imaging system achieved a resolving power of 60 lp/mm.

  16. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
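    The patent computes centroid coordinates of the beam image on a processing board. A plain intensity-weighted centroid, without the noise-assisted Bernoulli-trial refinement the patent describes, can be sketched as follows; the names and the synthetic test spot are assumptions.

```python
import numpy as np

def beam_centroid(frame, background=0.0):
    """Intensity-weighted centroid of a beam image (sketch).

    frame      : 2-D array of pixel intensities from one video frame.
    background : constant offset subtracted before weighting.
    Returns (row, col) centroid coordinates in pixel units.
    """
    img = np.clip(frame.astype(float) - background, 0.0, None)
    total = img.sum()
    if total == 0:
        raise ValueError("empty frame after background subtraction")
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic Gaussian spot centered near (40.0, 25.0) plus a small noise floor.
rng = np.random.default_rng(3)
r, c = np.indices((64, 64))
frame = np.exp(-((r - 40.0) ** 2 + (c - 25.0) ** 2) / 50.0) + 0.01 * rng.random((64, 64))
print(beam_centroid(frame, background=0.01))
```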

  17. System and method for magnetic current density imaging at ultra low magnetic fields

    DOEpatents

    Espy, Michelle A.; George, John Stevens; Kraus, Robert Henry; Magnelind, Per; Matlashov, Andrei Nikolaevich; Tucker, Don; Turovets, Sergei; Volegov, Petr Lvovich

    2016-02-09

    Preferred systems can include an electrical impedance tomography apparatus electrically connectable to an object; an ultra low field magnetic resonance imaging apparatus including a plurality of field directions and disposable about the object; a controller connected to the ultra low field magnetic resonance imaging apparatus and configured to implement a sequencing of one or more ultra low magnetic fields substantially along one or more of the plurality of field directions; and a display connected to the controller, and wherein the controller is further configured to reconstruct a displayable image of an electrical current density in the object. Preferred methods, apparatuses, and computer program products are also disclosed.

  18. Fast photoacoustic imaging system based on 320-element linear transducer array.

    PubMed

    Yin, Bangzheng; Xing, Da; Wang, Yi; Zeng, Yaguang; Tan, Yi; Chen, Qun

    2004-04-07

    A fast photoacoustic (PA) imaging system, based on a 320-element linear transducer array, was developed and tested on a tissue phantom. To reconstruct a test tomographic image, 64 time-domain PA signals were acquired from a tissue phantom with embedded light-absorbing targets. Signal acquisition was accomplished by utilizing 11 phase-controlled sub-arrays, each consisting of four transducers. The results show that the system can rapidly map the optical absorption of a tissue phantom and effectively detect the embedded light-absorbing target. By utilizing the multi-element linear transducer array and a phase-controlled imaging algorithm, PA tomography can be acquired more efficiently than with other existing technologies and algorithms. The methodology and equipment thus provide a rapid and reliable approach to PA imaging that may have potential applications in noninvasive imaging and clinical diagnosis.

  19. First Steps Toward Incorporating Image Based Diagnostics Into Particle Accelerator Control Systems Using Convolutional Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, A. L.; Biedron, S. G.; Milton, S. V.

    At present, a variety of image-based diagnostics are used in particle accelerator systems. Often times, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.
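    The record's CNN maps virtual-cathode laser images together with scalar machine settings (gun phase, solenoid strength) to downstream beam parameters. A minimal image-plus-scalars regression network in this spirit is sketched below; the architecture, layer sizes, and training step are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class BeamParamCNN(nn.Module):
    """Toy CNN regressor: laser image + machine scalars -> beam parameters."""

    def __init__(self, n_scalars=2, n_outputs=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # (N, 32, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_scalars, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, image, scalars):
        x = self.features(image).flatten(1)              # (N, 32)
        return self.head(torch.cat([x, scalars], dim=1))

# One illustrative training step on random stand-in data.
model = BeamParamCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 64, 64)                       # virtual cathode images
scalars = torch.randn(8, 2)                               # gun phase, solenoid strength
targets = torch.randn(8, 4)                               # downstream beam parameters
loss = nn.functional.mse_loss(model(images, scalars), targets)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```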

  20. New simple and low-cost methods for periodic checks of Cyclone® Plus Storage Phosphor System.

    PubMed

    Edalucci, Elisabetta; Maffione, Anna Margherita; Fornasier, Maria Rosa; De Denaro, Mario; Scian, Giovanni; Dore, Franca; Rubello, Domenico

    2017-01-01

    The recent large use of the Cyclone® Plus Storage Phosphor System, especially in European countries, as imaging system for quantification of radiochemical purity of radiopharmaceuticals raised the problem of setting the periodic controls as required by European Legislation. We described simple, low-cost methods for Cyclone® Plus quality controls, which can be useful to evaluate the performance measurement of this imaging system.

  1. Generating a Double-Scroll Attractor by Connecting a Pair of Mutual Mirror-Image Attractors via Planar Switching Control

    NASA Astrophysics Data System (ADS)

    Sun, Changchun; Chen, Zhongtang; Xu, Qicheng

    2017-12-01

    An original three-dimensional (3D) smooth continuous chaotic system and its mirror-image system with eight common parameters are constructed and a pair of symmetric chaotic attractors can be generated simultaneously. Basic dynamical behaviors of two 3D chaotic systems are investigated respectively. A double-scroll chaotic attractor by connecting the pair of mutual mirror-image attractors is generated via a novel planar switching control approach. Chaos can also be controlled to a fixed point, a periodic orbit and a divergent orbit respectively by switching between two chaotic systems. Finally, an equivalent 3D chaotic system by combining two 3D chaotic systems with a switching law is designed by utilizing a sign function. Two circuit diagrams for realizing the double-scroll attractor are depicted by employing an improved module-based design approach.

  2. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  3. Techniques for High Contrast Imaging in Multi-Star Systems II: Multi-Star Wavefront Control

    NASA Technical Reports Server (NTRS)

    Sirbu, D.; Thomas, S.; Belikov, R.

    2017-01-01

    Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments because of the diffraction and aberration leakage introduced by the additional stars, and as a result are not planned to be on direct imaging target lists. Multi-star wavefront control (MSWC) is a technique that uses a coronagraphic instrument's deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. Our previous paper introduced the Super-Nyquist Wavefront Control (SNWC) technique that uses a diffraction grating to enable the DM to generate high-contrast regions beyond the nominal controllable region. These two techniques can be combined to generate high-contrast regions for multi-star systems at any angular separation. As a case study, a high-contrast wavefront control (WC) simulation that applies these techniques shows that the habitable region of the Alpha Centauri system can be imaged, reaching 8 × 10^-9 mean contrast in 10% broadband light in one-sided dark holes spanning 1.6-5.5 λ/D.

  4. You can't touch this: touch-free navigation through radiological images.

    PubMed

    Ebert, Lars C; Hatch, Gary; Ampanozi, Garyfalia; Thali, Michael J; Ross, Steffen

    2012-09-01

    Keyboards, mice, and touch screens are a potential source of infection or contamination in operating rooms, intensive care units, and autopsy suites. The authors present a low-cost prototype system that allows touch-free control of a medical image viewer. The touch-free navigation system consists of a computer (iMac, OS X 10.6, Apple, USA) with a medical image viewer (OsiriX, OsiriX Foundation, Switzerland) and a depth camera (Kinect, Microsoft, USA). They implemented software that translates the data delivered by the camera and a voice recognition software into keyboard and mouse commands, which are then passed to OsiriX. In this feasibility study, the authors introduced 10 medical professionals to the system and asked them to re-create 12 images from a CT data set. They evaluated response times and usability of the system compared with standard mouse/keyboard control. Users felt comfortable with the system after approximately 10 minutes. Response time was 120 ms. Users required 1.4 times more time to re-create an image with gesture control. Users with OsiriX experience were significantly faster using the mouse/keyboard and faster than users without prior experience. They rated the system 3.4 out of 5 for ease of use in comparison to the mouse/keyboard. The touch-free, gesture-controlled system performs favorably and removes a potential vector for infection, protecting both patients and staff. Because the camera can be quickly and easily integrated into existing systems, requires no calibration, and is low cost, the barriers to using this technology are low.

  5. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel processing, so for an algorithm with a high degree of parallelism a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded versions of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements with respect to timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
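    The operations listed above are pixel independent, which is what makes them map naturally onto GPU kernels. The sketch below expresses two of them (flat-field correction and recursive temporal filtering) plus mask subtraction as whole-array expressions; the function names, filter coefficient, and toy frame loop are assumptions, and the NumPy expressions stand in for the per-pixel GPU kernels (a GPU array library such as CuPy could evaluate the same expressions on the device).

```python
import numpy as np

def flat_field_correct(frame, dark, flat, eps=1e-6):
    """Per-pixel flat-field correction: (raw - dark) / (flat - dark) (sketch)."""
    gain = np.clip(flat - dark, eps, None)
    return (frame - dark) / gain

def temporal_filter(prev_filtered, frame, alpha=0.25):
    """Recursive temporal (lag) filter: new = alpha*frame + (1 - alpha)*previous."""
    return alpha * frame + (1.0 - alpha) * prev_filtered

def subtract_mask(frame, mask):
    """Image subtraction against a stored mask, e.g. for DSA or roadmapping."""
    return frame - mask

# Toy fluoroscopy-like loop over random frames.
rng = np.random.default_rng(5)
dark = 10 + rng.random((512, 512))
flat = 200 + 5 * rng.random((512, 512))
filtered = np.zeros((512, 512))
mask = None
for _ in range(5):
    raw = 10 + 190 * rng.random((512, 512))
    corrected = flat_field_correct(raw, dark, flat)
    filtered = temporal_filter(filtered, corrected)
    mask = corrected if mask is None else mask
    dsa = subtract_mask(corrected, mask)
print(filtered.mean(), dsa.std())
```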

  6. Application of AI techniques to a voice-actuated computer system for reconstructing and displaying magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Sherley, Patrick L.; Pujol, Alfonso, Jr.; Meadow, John S.

    1990-07-01

    To provide a means of rendering complex computer architectures, languages, and input/output modalities transparent to experienced and inexperienced users, research is being conducted to develop a voice-driven/voice-response computer graphics imaging system. The system will be used for reconstructing and displaying computed tomography and magnetic resonance imaging scan data. In conjunction with this study, an artificial intelligence (AI) control strategy was developed to interface the voice components and support software to the computer graphics functions implemented on the Sun Microsystems 4/280 color graphics workstation. Based on generated text and converted renditions of verbal utterances by the user, the AI control strategy determines the user's intent and develops and validates a plan. The program type and parameters within the plan are used as input to the graphics system for reconstructing and displaying medical image data corresponding to that perceived intent. If the plan is not valid, the control strategy queries the user for additional information. The control strategy operates in a conversation mode and vocally provides system status reports. A detailed examination of the various AI techniques is presented, with major emphasis placed on their specific roles within the total control strategy structure.

  7. Pointing and control system performance and improvement strategies for the SOFIA Airborne Telescope

    NASA Astrophysics Data System (ADS)

    Graf, Friederike; Reinacher, Andreas; Jakob, Holger; Lampater, Ulrich; Pfueller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Fasoulas, Stefanos

    2016-07-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has already successfully conducted over 300 flights. In its early science phase, SOFIA's pointing requirements and especially the image jitter requirements of less than 1 arcsec rms have driven the design of the control system. Since the first observation flights, the image jitter has been gradually reduced by various control mechanisms. During smooth flight conditions, the current pointing and control system allows us to achieve the standards set for early science on SOFIA. However, the increasing demands on the image size require an image jitter of less than 0.4 arcsec rms during light turbulence to reach SOFIA's scientific goals. The major portion of the remaining image motion is caused by deformation and excitation of the telescope structure in a wide range of frequencies due to aircraft motion and aerodynamic and aeroacoustic effects. Therefore the so-called Flexible Body Compensation system (FBC) is used, a set of fixed-gain filters to counteract the structural bending and deformation. Thorough testing of the current system under various flight conditions has revealed a variety of opportunities for further improvements. The currently applied filters have solely been developed based on a FEM analysis. By implementing the inflight measurements in a simulation and optimization, an improved fixed-gain compensation method was identified. This paper will discuss promising results from various jitter measurements recorded with sampling frequencies of up to 400 Hz using the fast imaging tracking camera.

  8. Condenser optics, partial coherence, and imaging for soft-x-ray projection lithography.

    PubMed

    Sommargren, G E; Seppala, L G

    1993-12-01

    A condenser system couples the radiation source to an imaging system, controlling the uniformity and partial coherence at the object, which ultimately affects the characteristics of the aerial image. A soft-x-ray projection lithography system based on a ring-field imaging system and a laser-produced plasma x-ray source places considerable constraints on the design of a condenser system. Two designs are proposed, critical illumination and Köhler illumination, each of which requires three mirrors and scanning for covering the entire ring field with the required uniformity and partial coherence. Images based on Hopkins' formulation of partially coherent imaging are simulated.

  9. Operation of a Cartesian Robotic System in a Compact Microscope with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2006-01-01

    A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard sphere experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant but difficult to detect. The CMIS system is a machine vision system which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.

  10. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens

    PubMed Central

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.

    2015-01-01

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169

  11. Automating PACS quality control with the Vanderbilt image processing enterprise resource

    NASA Astrophysics Data System (ADS)

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2012-02-01

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  12. The Control Point Library Building System. [for Landsat MSS and RBV geometric image correction

    NASA Technical Reports Server (NTRS)

    Niblack, W.

    1981-01-01

    The Earth Resources Observation System (EROS) Data Center in Sioux Falls, South Dakota distributes precision corrected Landsat MSS and RBV data. These data are derived from master data tapes produced by the Master Data Processor (MDP), NASA's system for computing and applying corrections to the data. Included in the MDP is the Control Point Library Building System (CPLBS), an interactive, menu-driven system which permits a user to build and maintain libraries of control points. The control points are required to achieve the high geometric accuracy desired in the output MSS and RBV data. This paper describes the processing performed by CPLBS, the accuracy of the system, and the host computer and special image viewing equipment employed.

  13. Automated Visibility & Cloud Cover Measurements with a Solid State Imaging System

    DTIC Science & Technology

    1989-03-01

    Report GL-TR-89-0061 (SIO Ref. 89-7, MPL-U-26/89): Automated Visibility & Cloud Cover Measurements With A Solid-State Imaging System, by Richard W. Johnson et al. The report describes ground-based imaging systems and their control algorithms, their initial deployment, and their preliminary application.

  14. Computer-Aided Diagnostic System For Mass Survey Chest Images

    NASA Astrophysics Data System (ADS)

    Yasuda, Yoshizumi; Kinoshita, Yasuhiro; Emori, Yasufumi; Yoshimura, Hitoshi

    1988-06-01

    In order to support the screening of chest radiographs in mass surveys, a computer-aided diagnostic system has been developed that automatically detects abnormality in candidate images using digital image analysis techniques. Extracting the boundary lines of the lung fields and examining their shapes allows various kinds of abnormalities to be detected. Correction and expansion are facilitated by describing the system control, image analysis control and judgement of abnormality in a rule-type programming language. In experiments using typical samples of students' radiographs, good results were obtained for the detection of abnormal lung-field shape, cardiac hypertrophy and scoliosis. For the detection of diaphragmatic abnormality, relatively good results were obtained, but further improvements will be necessary.

  15. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

    The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on three consecutive stereo images and an on-the-fly (OTF) calibration using ground control points. The laser scanner boresight-angle calibration uses both manual and automatic methods with ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systematic precision of the two sensors. Analysis of the differences between ground control points and their corresponding image points in sequence images shows that object positions from camera and images agree to within about 15 cm in relative error and 20 cm in absolute error. Comparison of the differences between ground control points and their corresponding laser point clouds shows errors of less than 20 cm. The experimental results and analysis indicate that the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy and high-density road spatial data.

  16. Attitude Determination and Control System Design for a 6U Cube Sat for Proximity Operations and Rendezvous

    DTIC Science & Technology

    2014-08-04

    The ARAPAIMA (Application for Resident Space Object Proximity Analysis and IMAging) mission is carried out by a 6U CubeSat class satellite equipped with a warm gas propulsion system. This work presents the design of the attitude determination and control subsystem (ADCS) for this proximity operations and imaging satellite mission.

  17. Image quality and radiation reduction of 320-row area detector CT coronary angiography with optimal tube voltage selection and an automatic exposure control system: comparison with body mass index-adapted protocol.

    PubMed

    Lim, Jiyeon; Park, Eun-Ah; Lee, Whal; Shim, Hackjoon; Chung, Jin Wook

    2015-06-01

    To assess the image quality and radiation exposure of 320-row area detector computed tomography (320-ADCT) coronary angiography with optimal tube voltage selection with the guidance of an automatic exposure control system in comparison with a body mass index (BMI)-adapted protocol. Twenty-two patients (study group) underwent 320-ADCT coronary angiography using an automatic exposure control system with the target standard deviation value of 33 as the image quality index and the lowest possible tube voltage. For comparison, a sex- and BMI-matched group (control group, n = 22) using a BMI-adapted protocol was established. Images of both groups were reconstructed by an iterative reconstruction algorithm. For objective evaluation of the image quality, image noise, vessel density, signal to noise ratio (SNR), and contrast to noise ratio (CNR) were measured. Two blinded readers then subjectively graded the image quality using a four-point scale (1: nondiagnostic to 4: excellent). Radiation exposure was also measured. Although the study group tended to show higher image noise (14.1 ± 3.6 vs. 9.3 ± 2.2 HU, P = 0.111) and higher vessel density (665.5 ± 161 vs. 498 ± 143 HU, P = 0.430) than the control group, the differences were not significant. There was no significant difference between the two groups for SNR (52.5 ± 19.2 vs. 60.6 ± 21.8, P = 0.729), CNR (57.0 ± 19.8 vs. 67.8 ± 23.3, P = 0.531), or subjective image quality scores (3.47 ± 0.55 vs. 3.59 ± 0.56, P = 0.960). However, radiation exposure was significantly reduced by 42 % in the study group (1.9 ± 0.8 vs. 3.6 ± 0.4 mSv, P = 0.003). Optimal tube voltage selection with the guidance of an automatic exposure control system in 320-ADCT coronary angiography allows substantial radiation reduction without significant impairment of image quality, compared to the results obtained using a BMI-based protocol.

  18. Image data-processing system for solar astronomy

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.

    1977-01-01

    The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.

  19. Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems

    NASA Technical Reports Server (NTRS)

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-01-01

    High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control systems to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10(exp -10) for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs by minimizing the energy in a predefined region in the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.
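    The estimation step described above reconstructs the complex focal-plane field from intensity images taken with deliberate DM probe shapes. One common way to realize such DM-diversity estimation is pairwise probing, sketched below under the assumption that the probe fields are known from a model; this is an illustration of the general approach, not necessarily the authors' exact formulation.

```python
import numpy as np

def estimate_field_pairwise(I_plus, I_minus, probe_fields):
    """Estimate the complex focal-plane field from paired probe images (sketch).

    I_plus, I_minus : (n_probes, n_pix) intensities with +probe and -probe applied.
    probe_fields    : (n_probes, n_pix) complex model of each probe's field change.
    Uses (I+ - I-)/2 = 2*Re(E*conj(dE_k)) and solves per-pixel least squares.
    """
    n_probes, n_pix = I_plus.shape
    y = 0.5 * (I_plus - I_minus)                       # (n_probes, n_pix)
    E = np.empty(n_pix, dtype=complex)
    for p in range(n_pix):
        # rows: [2*Re(dE_k), 2*Im(dE_k)]; unknowns: [Re(E), Im(E)]
        A = np.column_stack([2 * probe_fields[:, p].real,
                             2 * probe_fields[:, p].imag])
        re_im, *_ = np.linalg.lstsq(A, y[:, p], rcond=None)
        E[p] = re_im[0] + 1j * re_im[1]
    return E

# Synthetic check: recover a random field from 3 probe pairs.
rng = np.random.default_rng(6)
E_true = rng.normal(size=50) + 1j * rng.normal(size=50)
probes = rng.normal(size=(3, 50)) + 1j * rng.normal(size=(3, 50))
I_plus = np.abs(E_true + probes) ** 2
I_minus = np.abs(E_true - probes) ** 2
E_hat = estimate_field_pairwise(I_plus, I_minus, probes)
print(np.allclose(E_hat, E_true))
```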

  20. System and method for controlling a combustor assembly

    DOEpatents

    York, William David; Ziminsky, Willy Steve; Johnson, Thomas Edward; Stevenson, Christian Xavier

    2013-03-05

    A system and method for controlling a combustor assembly are disclosed. The system includes a combustor assembly. The combustor assembly includes a combustor and a fuel nozzle assembly. The combustor includes a casing. The fuel nozzle assembly is positioned at least partially within the casing and includes a fuel nozzle. The fuel nozzle assembly further defines a head end. The system further includes a viewing device configured for capturing an image of at least a portion of the head end, and a processor communicatively coupled to the viewing device, the processor configured to compare the image to a standard image for the head end.

  1. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  2. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  3. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We developed an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for records matching the nystagmus response extracted from the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. In order to enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We applied our image-based computer-assisted diagnosis system to two real eye movement image sequences from patients with BPPV to show its validity.

  4. Wireless-PDA-controlled image workflow from PACS: the next trend in the health care enterprise?

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Documet, Jorge; Zhou, Michael Z.; Cao, Fei; Liu, Brent J.; Mogel, Greg T.; Huang, H. K.

    2003-05-01

    Image workflow in today's Picture Archiving and Communication Systems (PACS) is controlled from fixed Display Workstations (DW) using proprietary control interfaces. Remote access to the Hospital Information System (HIS) and Radiology Information System (RIS) for urgent patient information retrieval either does not exist or is only gradually becoming available. The lack of remote access and workflow control for HIS and RIS is especially apparent when it comes to medical images in a PACS at the department or hospital level. As images become more complex and data sizes expand rapidly with new imaging techniques, such as functional MRI, mammography, or routine spiral CT, to name a few, access and manageability become important issues. Long image downloads or incomplete work lists cannot be tolerated in a busy health care environment. In addition, the domain of the PACS is no longer limited to the imaging department: PACS is also being used in the ER and emergency care units. Thus prompt and secure access and manageability, not only by the radiologist but also by the referring physician, become crucial to optimally utilize the PACS in the health care enterprise of the new millennium. The purpose of this paper is to introduce a concept, and its implementation, for remote access and workflow control of the PACS combining wireless, Internet and Internet2 technologies. A wireless device, the Personal Digital Assistant (PDA), is used to communicate with a PACS web server that acts as a gateway controlling the commands for which the user has access to the PACS server. The commands implemented for this test-bed are query/retrieve of the patient list and study list, including modality, examination, series and image selection, and pushing any list items to a selected DW on the PACS network.

  5. System for verifiable CT radiation dose optimization based on image quality. part II. process control system.

    PubMed

    Larson, David B; Malarik, Remo J; Hall, Seth M; Podberesky, Daniel J

    2013-10-01

    To evaluate the effect of an automated computed tomography (CT) radiation dose optimization and process control system on the consistency of estimated image noise and size-specific dose estimates (SSDEs) of radiation in CT examinations of the chest, abdomen, and pelvis. This quality improvement project was determined not to constitute human subject research. An automated system was developed to analyze each examination immediately after completion, and to report individual axial-image-level and study-level summary data for patient size, image noise, and SSDE. The system acquired data for 4 months beginning October 1, 2011. Protocol changes were made by using parameters recommended by the prediction application, and 3 months of additional data were acquired. Preimplementation and postimplementation mean image noise and SSDE were compared by using unpaired t tests and F tests. Common-cause variation was differentiated from special-cause variation by using a statistical process control individual chart. A total of 817 CT examinations, 490 acquired before and 327 acquired after the initial protocol changes, were included in the study. Mean patient age and water-equivalent diameter were 12.0 years and 23.0 cm, respectively. The difference between actual and target noise increased from -1.4 to 0.3 HU (P < .01) and the standard deviation decreased from 3.9 to 1.6 HU (P < .01). Mean SSDE decreased from 11.9 to 7.5 mGy, a 37% reduction (P < .01). The process control chart identified several special causes of variation. Implementation of an automated CT radiation dose optimization system led to verifiable simultaneous decrease in image noise variation and SSDE. The automated nature of the system provides the opportunity for consistent CT radiation dose optimization on a broad scale. © RSNA, 2013.
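
    The process control portion of the system relies on a statistical process control individuals chart to separate common-cause from special-cause variation. A minimal sketch of that calculation is shown below, using the conventional moving-range estimate of the control limits; the helper names and the single out-of-limits rule are illustrative rather than taken from the paper.

    ```python
    import numpy as np

    def individuals_chart_limits(values):
        """Control limits for an SPC individuals (I) chart.

        Uses the conventional moving-range estimate of short-term variation:
        UCL/LCL = mean +/- 2.66 * average moving range.
        """
        x = np.asarray(values, dtype=float)
        moving_range = np.abs(np.diff(x))
        center = x.mean()
        spread = 2.66 * moving_range.mean()
        return center - spread, center, center + spread

    def special_cause_points(values):
        """Flag points outside the control limits (one common special-cause rule)."""
        lcl, _, ucl = individuals_chart_limits(values)
        return [i for i, v in enumerate(values) if v < lcl or v > ucl]

    # Example: per-examination SSDE values (mGy) streamed from the analysis system.
    print(special_cause_points([11.8, 12.1, 11.5, 12.0, 18.4, 11.7, 11.9]))
    ```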

  6. Excitation-scanning hyperspectral imaging system for microscopic and endoscopic applications

    NASA Astrophysics Data System (ADS)

    Mayes, Sam A.; Leavesley, Silas J.; Rich, Thomas C.

    2016-04-01

    Current microscopic and endoscopic technologies for cancer screening utilize white-light illumination sources. Hyperspectral imaging has been shown to improve sensitivity while retaining specificity when compared to white-light imaging in both microscopy and in vivo imaging. However, hyperspectral imaging methods have historically suffered from slow acquisition times due to the narrow bandwidth of spectral filters; often minutes are required to gather a full image stack. We have developed a novel approach called excitation-scanning hyperspectral imaging that provides a 2-3 order-of-magnitude increase in signal strength. This reduces acquisition times significantly, allowing for live video acquisition. Here, we describe a preliminary prototype excitation-scanning hyperspectral imaging system that can be coupled with endoscopes or microscopes for hyperspectral imaging of tissues and cells. Our system comprises three subsystems: illumination, transmission, and imaging. The illumination subsystem employs light-emitting diode arrays to illuminate at different wavelengths. The transmission subsystem utilizes a unique geometry of optics and a liquid light guide. Software controls allow us to interface with and control the subsystems and components. Digital and analog signals are used to coordinate wavelength intensity, cycling, and camera triggering. Testing shows the system can cycle 16 wavelengths as fast as 1 ms per cycle, and more than 18% of the light is transmitted through the system. Our setup should allow for hyperspectral imaging of tissue and cells in real time.
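
    The acquisition loop implied by the abstract, cycling LED excitation wavelengths while triggering the camera once per wavelength, could be sketched as follows. The wavelength list, dwell-time handling, and the set_led_channel / trigger_and_read hardware hooks are assumptions standing in for the real digital and analog control signals.

    ```python
    import time

    # Illustrative 16-channel excitation list (nm); the prototype's actual LED
    # wavelengths are not taken from the paper.
    WAVELENGTHS_NM = [360 + 10 * i for i in range(16)]

    def acquire_hyperspectral_stack(set_led_channel, trigger_and_read, dwell_s=0.001):
        """Cycle the LED excitation channels and grab one frame per wavelength.

        set_led_channel(index, on) and trigger_and_read() stand in for whatever
        hardware interface drives the LED array and triggers the camera.
        """
        stack = []
        for i, _wavelength in enumerate(WAVELENGTHS_NM):
            set_led_channel(i, True)      # gate this excitation wavelength on
            frame = trigger_and_read()    # exposure synchronized to the illumination window
            time.sleep(dwell_s)           # ~1 ms per wavelength, as reported for the prototype
            set_led_channel(i, False)
            stack.append(frame)
        return stack
    ```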

  7. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system for aluminum pipe, using Tungsten Inert Gas (TIG) welding with a vision sensor, was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
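
    A minimal sketch of the welding-speed neural network mentioned above is given below: a tiny feed-forward model mapping the measured backside molten-pool width to a commanded welding speed. The network size, the choice of input and output, and all weight values are placeholders, not the trained model from the study.

    ```python
    import numpy as np

    class WeldingSpeedNet:
        """Tiny multilayer perceptron mapping backside molten-pool width (mm) to a
        commanded welding speed (mm/s). Weights would be obtained by training on
        experimental pool-width / speed pairs; the values below are placeholders."""

        def __init__(self, w1, b1, w2, b2):
            self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

        def __call__(self, pool_width_mm):
            x = np.array([pool_width_mm], dtype=float)
            h = np.tanh(self.w1 @ x + self.b1)       # hidden layer
            return float(self.w2 @ h + self.b2)      # welding speed command

    # Placeholder weights for illustration only.
    net = WeldingSpeedNet(w1=np.array([[0.8], [-0.5]]), b1=np.array([0.1, 0.2]),
                          w2=np.array([1.2, -0.7]), b2=2.0)
    speed = net(6.5)   # e.g. a 6.5 mm pool width -> commanded welding speed
    ```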

  8. Survey of the prevalence and methodology of quality assurance for B-mode ultrasound image quality among veterinary sonographers.

    PubMed

    Hoscheit, Larry P; Heng, Hock Gan; Lim, Chee Kin; Weng, Hsin-Yi

    2018-05-01

    Image quality in B-mode ultrasound is important as it reflects the diagnostic accuracy and diagnostic information provided during clinical scanning. Quality assurance programs for B-mode ultrasound systems/components are comprised of initial quality acceptance testing and subsequent regularly scheduled quality control testing. The importance of quality assurance programs for B-mode ultrasound image quality using ultrasound phantoms is well documented in the human medical and medical physics literature. The purpose of this prospective, cross-sectional, survey study was to determine the prevalence and methodology of quality acceptance testing and quality control testing of image quality for ultrasound system/components among veterinary sonographers. An online electronic survey was sent to 1497 members of veterinary imaging organizations: the American College of Veterinary Radiology, the Veterinary Ultrasound Society, and the European Association of Veterinary Diagnostic Imaging, and a total of 167 responses were received. The results showed that the percentages of veterinary sonographers performing quality acceptance testing and quality control testing are 42% (64/151; 95% confidence interval 34-52%) and 26% (40/156: 95% confidence interval 19-33%) respectively. Of the respondents who claimed to have quality acceptance testing or quality control testing of image quality in place for their ultrasound system/components, 0% have performed quality acceptance testing or quality control testing correctly (quality acceptance testing 95% confidence interval: 0-6%, quality control testing 95% confidence interval: 0-11%). Further education and guidelines are recommended for veterinary sonographers in the area of quality acceptance testing and quality control testing for B-mode ultrasound equipment/components. © 2018 American College of Veterinary Radiology.
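
    The reported confidence intervals for the survey proportions can be approximately reproduced with a standard interval for a binomial proportion. The sketch below uses the Wilson score interval; the survey does not state which interval method was actually used, so this choice is an assumption for illustration.

    ```python
    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a proportion (one common choice; the
        survey's exact interval method is not stated in the abstract)."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - margin, centre + margin

    print(wilson_interval(64, 151))   # quality acceptance testing, ~42% (reported CI 34-52%)
    print(wilson_interval(40, 156))   # quality control testing, ~26% (reported CI 19-33%)
    ```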

  9. Design of light-small high-speed image data processing system

    NASA Astrophysics Data System (ADS)

    Yang, Jinbao; Feng, Xue; Li, Fei

    2015-10-01

    A lightweight, small, high-speed image data processing system was designed to meet the requirements of image data processing in aerospace. The system is built from an FPGA, a DSP, and an MCU (micro-controller), implementing video compression of 3-million-pixel images at 15 frames per second and real-time return of the compressed images to the host system. The programmable nature of the FPGA, a high-performance image compression IC, and a configurable MCU were fully exploited to improve integration. In addition, a hard-soft (rigid-flex) combined board design was introduced and the PCB layout was optimized. As a result, the system achieves miniaturization, light weight, and fast heat dissipation. Experiments show that the system's functions were implemented correctly and operate stably. In conclusion, the system can be widely used in lightweight, small imaging applications.

  10. Tracking scanning laser ophthalmoscope (TSLO)

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Ferguson, R. Daniel; Magill, John C.; White, Michael A.; Elsner, Ann E.; Webb, Robert H.

    2003-07-01

    The effectiveness of image stabilization with a retinal tracker in a multi-function, compact scanning laser ophthalmoscope (TSLO) was demonstrated in initial human subject tests. The retinal tracking system uses a confocal reflectometer with a closed loop optical servo system to lock onto features in the fundus. The system is modular to allow configuration for many research and clinical applications, including hyperspectral imaging, multifocal electroretinography (MFERG), perimetry, quantification of macular and photo-pigmentation, imaging of neovascularization and other subretinal structures (drusen, hyper-, and hypo-pigmentation), and endogenous fluorescence imaging. Optical hardware features include dual wavelength imaging and detection, integrated monochromator, higher-order motion control, and a stimulus source. The system software consists of a real-time feedback control algorithm and a user interface. Software enhancements include automatic bias correction, asymmetric feature tracking, image averaging, automatic track re-lock, and acquisition and logging of uncompressed images and video files. Normal adult subjects were tested without mydriasis to optimize the tracking instrumentation and to characterize imaging performance. The retinal tracking system achieves a bandwidth of greater than 1 kHz, which permits tracking at rates that greatly exceed the maximum rate of motion of the human eye. The TSLO stabilized images in all test subjects during ordinary saccades up to 500 deg/sec with an inter-frame accuracy better than 0.05 deg. Feature lock was maintained for minutes despite subject eye blinking. Successful frame averaging allowed image acquisition with decreased noise in low-light applications. The retinal tracking system significantly enhances the imaging capabilities of the scanning laser ophthalmoscope.

  11. Networked vision system using a Prolog controller

    NASA Astrophysics Data System (ADS)

    Batchelor, B. G.; Caton, S. J.; Chatburn, L. T.; Crowther, R. A.; Miller, J. W. V.

    2005-11-01

    Prolog offers a very different style of programming compared to conventional languages; it can define object properties and abstract relationships in a way that Java, C, C++, etc. find awkward. In an accompanying paper, the authors describe how distributed web-based vision systems can be built from elements that may even be located on different continents. One particular system of this general type is described here. The top-level controller is a Prolog program, which operates one or more image processing engines. This type of function is natural to Prolog, since it is able to reason logically using symbolic (non-numeric) data. Although Prolog is not suitable for programming image processing functions directly, it is ideal for analysing the results derived by an image processor. This article describes the implementation of two systems in which a Prolog program controls several image processing engines, a simple robot (a pneumatic pick-and-place arm), LED illumination modules, and various mains-powered devices.

  12. Bluetooth based chaos synchronization using particle swarm optimization and its applications to image encryption.

    PubMed

    Yau, Her-Terng; Hung, Tzu-Hsiang; Hsieh, Chia-Chun

    2012-01-01

    This study used the complex dynamic characteristics of chaotic systems and Bluetooth to explore wireless chaotic communication secrecy and to develop a communication security system. A PID controller for chaos synchronization control was applied, and its optimum parameters were obtained using a Particle Swarm Optimization (PSO) algorithm. Bluetooth was used to realize wireless transmission, and a chaotic wireless communication security system was developed around this design concept. The experimental results show that the scheme can be used successfully for image encryption.
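
    The PSO tuning step described above can be sketched as a generic particle swarm search over the PID gain vector [Kp, Ki, Kd]. The cost function (for example an integrated synchronization-error measure obtained by simulating the controlled chaotic system) is left as a caller-supplied callable, and the swarm hyperparameters are illustrative defaults rather than the values used in the study.

    ```python
    import numpy as np

    def pso_tune_pid(cost, bounds, n_particles=20, n_iter=100,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Particle Swarm Optimization of PID gains [Kp, Ki, Kd].

        cost(gains) should return e.g. the integrated absolute chaos-synchronization
        error from simulating the controlled system with those gains; the simulator
        itself is application-specific and not shown here.
        """
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T                 # bounds: [(lo, hi)] per gain
        x = rng.uniform(lo, hi, size=(n_particles, 3))   # positions = candidate gains
        v = np.zeros_like(x)                             # velocities
        pbest = x.copy()
        pbest_cost = np.array([cost(p) for p in x])
        gbest = pbest[pbest_cost.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            c = np.array([cost(p) for p in x])
            improved = c < pbest_cost
            pbest[improved], pbest_cost[improved] = x[improved], c[improved]
            gbest = pbest[pbest_cost.argmin()].copy()
        return gbest
    ```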

  13. Low-level processing for real-time image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
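
    The chain-code representation mentioned above can be illustrated in a few lines: given an ordered list of 8-connected edge pixels, each step is encoded as one of eight Freeman directions. The direction convention chosen here is one common variant, not necessarily the one used by the described hardware and microprocessor.

    ```python
    # Freeman 8-direction chain code for an ordered list of boundary pixels.
    # Direction 0 = +x, directions increase counter-clockwise (one common convention).
    DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                  (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code(boundary):
        """boundary: list of (x, y) pixel coordinates of one edge segment,
        ordered so that consecutive pixels are 8-connected."""
        code = []
        for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
            code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
        return code

    # Example: a short diagonal-then-horizontal edge segment.
    print(chain_code([(0, 0), (1, 1), (2, 2), (3, 2)]))   # -> [1, 1, 0]
    ```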

  14. High-speed multi-frame dynamic transmission electron microscope image acquisition system with arbitrary timing

    DOEpatents

    Reed, Bryan W.; DeHope, William J.; Huete, Glenn; LaGrange, Thomas B.; Shuttlesworth, Richard M.

    2016-02-23

    An electron microscope is disclosed which has a laser-driven photocathode and an arbitrary waveform generator (AWG) laser system ("laser"). The laser produces a train of temporally-shaped laser pulses each being of a programmable pulse duration, and directs the laser pulses to the laser-driven photocathode to produce a train of electron pulses. An image sensor is used along with a deflector subsystem. The deflector subsystem is arranged downstream of the target but upstream of the image sensor, and has a plurality of plates. A control system having a digital sequencer controls the laser and a plurality of switching components, synchronized with the laser, to independently control excitation of each one of the deflector plates. This allows each electron pulse to be directed to a different portion of the image sensor, as well as to enable programmable pulse durations and programmable inter-pulse spacings.

  15. High-speed multiframe dynamic transmission electron microscope image acquisition system with arbitrary timing

    DOEpatents

    Reed, Bryan W.; DeHope, William J.; Huete, Glenn; LaGrange, Thomas B.; Shuttlesworth, Richard M.

    2015-10-20

    An electron microscope is disclosed which has a laser-driven photocathode and an arbitrary waveform generator (AWG) laser system ("laser"). The laser produces a train of temporally-shaped laser pulses of a predefined pulse duration and waveform, and directs the laser pulses to the laser-driven photocathode to produce a train of electron pulses. An image sensor is used along with a deflector subsystem. The deflector subsystem is arranged downstream of the target but upstream of the image sensor, and has two pairs of plates arranged perpendicular to one another. A control system controls the laser and a plurality of switching components synchronized with the laser, to independently control excitation of each one of the deflector plates. This allows each electron pulse to be directed to a different portion of the image sensor, as well as to be provided with an independently set duration and independently set inter-pulse spacings.

  16. High-speed multiframe dynamic transmission electron microscope image acquisition system with arbitrary timing

    DOEpatents

    Reed, Bryan W.; Dehope, William J; Huete, Glenn; LaGrange, Thomas B.; Shuttlesworth, Richard M

    2016-06-21

    An electron microscope is disclosed which has a laser-driven photocathode and an arbitrary waveform generator (AWG) laser system ("laser"). The laser produces a train of temporally-shaped laser pulses of a predefined pulse duration and waveform, and directs the laser pulses to the laser-driven photocathode to produce a train of electron pulses. An image sensor is used along with a deflector subsystem. The deflector subsystem is arranged downstream of the target but upstream of the image sensor, and has two pairs of plates arranged perpendicular to one another. A control system controls the laser and a plurality of switching components synchronized with the laser, to independently control excitation of each one of the deflector plates. This allows each electron pulse to be directed to a different portion of the image sensor, as well as to be provided with an independently set duration and independently set inter-pulse spacings.

  17. Foucault imaging by using non-dedicated transmission electron microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taniguchi, Yoshifumi; Matsumoto, Hiroaki; Harada, Ken

    2012-08-27

    An electron optical system for observing Foucault images was constructed using a conventional transmission electron microscope without any special equipment for Lorentz microscopy. The objective lens was switched off and an electron beam was converged by a condenser optical system to the crossover on the selected area aperture plane. The selected area aperture was used as an objective aperture to select the deflected beam for Foucault mode, and the successive image-forming lenses were controlled for observation of the specimen images. The irradiation area on the specimen was controlled by selecting the appropriate diameter of the condenser aperture.

  18. Sensory Interactive Teleoperator Robotic Grasping

    NASA Technical Reports Server (NTRS)

    Alark, Keli; Lumia, Ron

    1997-01-01

    As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller, and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through the CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation, and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.

  19. Mapping invasive weeds and their control with spatial information technologies

    USDA-ARS?s Scientific Manuscript database

    We discuss applications of airborne multispectral digital imaging systems, imaging processing techniques, global positioning systems (GPS), and geographic information systems (GIS) for mapping the invasive weeds giant salvinia (Salvinia molesta) and Brazilian pepper (Schinus terebinthifolius) and fo...

  20. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.M.; Zander, M.E.; Brown, S.K.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  1. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.M.; Zander, M.E.; Brown, S.K.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  2. Novel Image Quality Control Systems (Add-On). Innovative Computational Methods for Inverse Problems in Optical and SAR Imaging

    DTIC Science & Technology

    2007-02-28

    This DTIC record excerpt is fragmentary. It lists project publications, including: Z. Mu, R. Plemmons, and P. Santago, "Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response," International Journal of Imaging Systems and [...], 1767-1782, 2006; and it describes rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and the intelligence agencies.

  3. Apparatus for monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1981-01-01

    A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain, and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image, and a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.

  4. Method of monitoring crystal growth

    DOEpatents

    Sachs, Emanual M.

    1982-01-01

    A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain, and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image, and a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.

  5. Evaluation of EIT system performance.

    PubMed

    Yasin, Mamatjan; Böhm, Stephan; Gaggero, Pascal O; Adler, Andy

    2011-07-01

    An electrical impedance tomography (EIT) system images internal conductivity from surface electrical stimulation and measurement. Such systems necessarily comprise multiple design choices from cables and hardware design to calibration and image reconstruction. In order to compare EIT systems and study the consequences of changes in system performance, this paper describes a systematic approach to evaluate the performance of the EIT systems. The system to be tested is connected to a saline phantom in which calibrated contrasting test objects are systematically positioned using a position controller. A set of evaluation parameters are proposed which characterize (i) data and image noise, (ii) data accuracy, (iii) detectability of single contrasts and distinguishability of multiple contrasts, and (iv) accuracy of reconstructed image (amplitude, resolution, position and ringing). Using this approach, we evaluate three different EIT systems and illustrate the use of these tools to evaluate and compare performance. In order to facilitate the use of this approach, all details of the phantom, test objects and position controller design are made publicly available including the source code of the evaluation and reporting software.

  6. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB® and Simulink®- based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  7. Computer-Aided Remote Driving

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    1994-01-01

    System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.

  8. A new concept for medical imaging centered on cellular phone technology.

    PubMed

    Granot, Yair; Ivorra, Antoni; Rubinsky, Boris

    2008-04-30

    According to World Health Organization reports, some three quarters of the world population does not have access to medical imaging. In addition, in developing countries over 50% of medical equipment that is available is not being used because it is too sophisticated or in disrepair or because the health personnel are not trained to use it. The goal of this study is to introduce and demonstrate the feasibility of a new concept in medical imaging that is centered on cellular phone technology and which may provide a solution to medical imaging in underserved areas. The new system replaces the conventional stand-alone medical imaging device with a new medical imaging system made of two independent components connected through cellular phone technology. The independent units are: a) a data acquisition device (DAD) at a remote patient site that is simple, with limited controls and no image display capability and b) an advanced image reconstruction and hardware control multiserver unit at a central site. The cellular phone technology transmits unprocessed raw data from the patient site DAD and receives and displays the processed image from the central site. (This is different from conventional telemedicine where the image reconstruction and control is at the patient site and telecommunication is used to transmit processed images from the patient site). The primary goal of this study is to demonstrate that the cellular phone technology can function in the proposed mode. The feasibility of the concept is demonstrated using a new frequency division multiplexing electrical impedance tomography system, which we have developed for dynamic medical imaging, as the medical imaging modality. The system is used to image through a cellular phone a simulation of breast cancer tumors in a medical imaging diagnostic mode and to image minimally invasive tissue ablation with irreversible electroporation in a medical imaging interventional mode.

  9. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging

    PubMed Central

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-01-01

    The programmable and digital metamaterials or metasurfaces presented recently have great potential for designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging at microwave frequencies. Compared with existing single-sensor imagers composed of active spatial modulators with independently controlled units, we introduce a randomly programmable metasurface to realize the modulator masks, in which rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Unlike single-sensor approaches that use frequency agility, the proposed imaging system uses variable modulators at a single frequency, which avoids object dispersion. To realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which makes it convenient to change the row and column voltages that switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which provide enough measurement modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results at microwave frequencies validate our new single-sensor and single-frequency imaging system. PMID:27025907
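
    The single-sensor measurement model behind this approach is linear: each random metasurface pattern yields one scalar measurement, and the scene is recovered by solving the resulting inverse problem. The sketch below illustrates that idea with a regularized least-squares reconstruction; the pattern statistics (real-valued stand-ins for the four 2-bit coding states), the noise level, and the solver are illustrative simplifications, not the authors' electromagnetic model or algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_pixels = 64      # unknown scene (flattened 8x8 object reflectivity)
    n_masks = 128      # number of random coding patterns (measurement modes)

    # Each row of A is one transmission pattern of the programmable metasurface,
    # drawn here from four levels standing in for the 2-bit coding states.
    A = rng.choice([0.0, 1 / 3, 2 / 3, 1.0], size=(n_masks, n_pixels))

    x_true = np.zeros(n_pixels)
    x_true[[10, 27, 45]] = 1.0                                  # a sparse test object
    y = A @ x_true + 0.01 * rng.standard_normal(n_masks)        # single-sensor measurements

    # Tikhonov-regularized least-squares reconstruction.
    lam = 1e-2
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)
    print(np.argsort(x_hat)[-3:])          # indices of the brightest recovered pixels
    ```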

  10. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging.

    PubMed

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-03-30

    The programmable and digital metamaterials or metasurfaces presented recently have great potential for designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging at microwave frequencies. Compared with existing single-sensor imagers composed of active spatial modulators with independently controlled units, we introduce a randomly programmable metasurface to realize the modulator masks, in which rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Unlike single-sensor approaches that use frequency agility, the proposed imaging system uses variable modulators at a single frequency, which avoids object dispersion. To realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which makes it convenient to change the row and column voltages that switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which provide enough measurement modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results at microwave frequencies validate our new single-sensor and single-frequency imaging system.

  11. Omniview motionless camera orientation system

    NASA Technical Reports Server (NTRS)

    Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)

    2010-01-01

    An apparatus and method is provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.
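
    The core transformation, mapping a selected portion of a circular (fisheye) image to a planar perspective view, can be sketched for one output pixel as below. An equidistant fisheye model (r = f·θ) and a particular pan/tilt rotation order are assumed here for illustration; the patent's transformation algorithms are more general.

    ```python
    import numpy as np

    def planar_to_fisheye(u, v, f_out, pan, tilt, f_fish, cx, cy):
        """Map one pixel (u, v) of a virtual planar (perspective) view to the
        corresponding source pixel of a fisheye image.

        pan, tilt (radians) select the viewing direction within the hemispherical
        field; f_out is the virtual camera focal length in pixels; f_fish, cx, cy
        describe the fisheye image (equidistant model assumed).
        """
        # Ray through the output pixel in the virtual camera frame (z forward).
        d = np.array([u, v, f_out], dtype=float)
        d /= np.linalg.norm(d)

        # Rotate the ray into the fisheye camera frame: tilt about x, then pan about y.
        ct, st = np.cos(tilt), np.sin(tilt)
        cp, sp = np.cos(pan), np.sin(pan)
        rot_tilt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
        rot_pan = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        d = rot_pan @ rot_tilt @ d

        # Equidistant fisheye projection of the rotated ray.
        theta = np.arccos(np.clip(d[2], -1.0, 1.0))   # angle from the optical axis
        phi = np.arctan2(d[1], d[0])                  # azimuth
        r = f_fish * theta
        return cx + r * np.cos(phi), cy + r * np.sin(phi)
    ```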

  12. Development of a contrast phantom for active millimeter-wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Weatherall, James C.; Brauer, Carolyn S.; Smith, Barry T.

    2011-06-01

    As the development of active millimeter wave imaging systems continues, it is necessary to validate materials that simulate the expected response of explosives. While physics-based models have been used to develop simulants, it is desirable to image both the explosive and simulant together in a controlled fashion in order to demonstrate success. To this end, a millimeter wave contrast phantom has been created to calibrate image grayscale while controlling the configuration of the explosive and simulant such that direct comparison of their respective returns can be performed. The physics of the phantom are described, with millimeter wave images presented to show successful development of the phantom and simulant validation at GHz frequencies.

  13. Hapten-derivatized nanoparticle targeting and imaging of gene expression by multimodality imaging systems.

    PubMed

    Cheng, C-M; Chu, P-Y; Chuang, K-H; Roffler, S R; Kao, C-H; Tseng, W-L; Shiea, J; Chang, W-D; Su, Y-C; Chen, B-M; Wang, Y-M; Cheng, T-L

    2009-01-01

    Non-invasive gene monitoring is important for most gene therapy applications to ensure selective gene transfer to specific cells or tissues. We developed a non-invasive imaging system to assess the location and persistence of gene expression by anchoring an anti-dansyl (DNS) single-chain antibody (DNS receptor) on the cell surface to trap DNS-derivatized imaging probes. DNS hapten was covalently attached to cross-linked iron oxide (CLIO) to form a 39+/-0.5 nm DNS-CLIO nanoparticle imaging probe. DNS-CLIO specifically bound to DNS receptors but not to a control single-chain antibody receptor. DNS-CLIO (100 microM Fe) was non-toxic to both B16/DNS (DNS receptor positive) and B16/phOx (control receptor positive) cells. Magnetic resonance (MR) imaging could detect as few as 10% B16/DNS cells in a mixture in vitro. Importantly, DNS-CLIO specifically bound to a B16/DNS tumor, which markedly reduced signal intensity. Similar results were also shown with DNS quantum dots, which specifically targeted CT26/DNS cells but not control CT26/phOx cells both in vitro and in vivo. These results demonstrate that DNS nanoparticles can systemically monitor the expression of DNS receptor in vivo by feasible imaging systems. This targeting strategy may provide a valuable tool to estimate the efficacy and specificity of different gene delivery systems and optimize gene therapy protocols in the clinic.

  14. Informatics in radiology: Intuitive user interface for 3D image manipulation using augmented reality and a smartphone as a remote control.

    PubMed

    Nakata, Norio; Suzuki, Naoki; Hattori, Asaki; Hirai, Naoya; Miyamoto, Yukio; Fukuda, Kunihiko

    2012-01-01

    Although widely used as a pointing device on personal computers (PCs), the mouse was originally designed for control of two-dimensional (2D) cursor movement and is not suited to complex three-dimensional (3D) image manipulation. Augmented reality (AR) is a field of computer science that involves combining the physical world and an interactive 3D virtual world; it represents a new 3D user interface (UI) paradigm. A system for 3D and four-dimensional (4D) image manipulation has been developed that uses optical tracking AR integrated with a smartphone remote control. The smartphone is placed in a hard case (jacket) with a 2D printed fiducial marker for AR on the back. It is connected to a conventional PC with an embedded Web camera by means of WiFi. The touch screen UI of the smartphone is then used as a remote control for 3D and 4D image manipulation. Using this system, the radiologist can easily manipulate 3D and 4D images from computed tomography and magnetic resonance imaging in an AR environment with high-quality image resolution. Pilot assessment of this system suggests that radiologists will be able to manipulate 3D and 4D images in the reading room in the near future. Supplemental material available at http://radiographics.rsna.org/lookup/suppl/doi:10.1148/rg.324115086/-/DC1.

  15. GOES I/M image navigation and registration

    NASA Technical Reports Server (NTRS)

    Fiorello, J. L., Jr.; Oh, I. H.; Kelly, K. A.; Ranne, L.

    1989-01-01

    Image Navigation and Registration (INR) is the system that will be used on future Geostationary Operational Environmental Satellite (GOES) missions to locate and register radiometric imagery data. It consists of a semiclosed-loop system with a ground-based segment that generates coefficients to perform image motion compensation (IMC). The IMC coefficients are uplinked to the satellite-based segment, where they are used to adjust the displacement of the imagery data due to movement of the imaging instrument line of sight. The flight dynamics aspects of the INR system are discussed in terms of the attitude and orbit determination, attitude pointing, and attitude and orbit control needed to perform INR. The modeling used in the determination of orbit and attitude is discussed, along with the method of on-orbit control used in the INR system and various factors that affect stability. Also discussed are potential error sources inherent in the INR system and the operational methods of compensating for these errors.

  16. Welding Penetration Control of Fixed Pipe in TIG Welding Using Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    This paper presents a study on welding penetration control of a fixed pipe in Tungsten Inert Gas (TIG) welding using a fuzzy inference system. Welding penetration control is essential to producing quality welds with a specified geometry. For pipe welding using constant arc current and welding speed, the bead width becomes wider as the circumferential welding of a small-diameter pipe progresses. With the pipe welded in a fixed position, excessive arc current yields burn-through of the metal, while insufficient arc current produces an imperfect weld. To avoid these errors and to obtain a uniform weld bead over the entire circumference of the pipe, the welding conditions should be controlled as the welding proceeds. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in fixed position using an AC welding machine. The monitoring system used a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A simulation of welding control using a fuzzy inference system was constructed to model the welding control process. The simulation results show that the fuzzy controller is suitable for controlling the welding speed and appropriate for implementation in the welding system. A series of experiments was conducted to evaluate the performance of the fuzzy controller; the experimental results show the effectiveness of the control system, confirmed by sound welds.
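
    A minimal sketch of the fuzzy inference idea described above maps the bead-width error to a welding-speed correction using triangular membership functions and weighted-average defuzzification. The membership breakpoints, rule consequents, and units are illustrative assumptions, not the controller parameters used in the study.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b on support [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_speed_correction(width_error_mm):
        """Map bead-width error (measured width - target width, mm) to a welding-speed
        correction (mm/s). Rule base: if the bead is too wide, speed up; if too
        narrow, slow down. All numeric parameters here are illustrative."""
        # Fuzzify the error into three sets.
        mu_narrow = tri(width_error_mm, -2.0, -1.0, 0.0)
        mu_ok     = tri(width_error_mm, -1.0,  0.0, 1.0)
        mu_wide   = tri(width_error_mm,  0.0,  1.0, 2.0)

        # Singleton consequents and weighted-average defuzzification.
        weights = [mu_narrow, mu_ok, mu_wide]
        outputs = [-0.5, 0.0, +0.5]          # slow down / hold / speed up (mm/s)
        total = sum(weights)
        return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

    print(fuzzy_speed_correction(0.6))   # bead slightly too wide -> small speed increase
    ```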

  17. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    NASA Astrophysics Data System (ADS)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, large soft tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed to assist surgical diagnosis and therapy in MRI-guided procedures over the last decade, a unified method for developing robotic systems with high fidelity in terms of accuracy, dynamic performance, size, robustness, and modularity that can work inside a closed-bore MRI scanner remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors, 2) a robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex-vivo phantom validates the system workflow and MRI compatibility, and shows that the robotic system has better than 0.01 mm positioning accuracy.

  18. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    NASA Astrophysics Data System (ADS)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role for non-destructive investigations of the dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to a more efficient use of a beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real-time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated in a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
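
    The image-based trigger concept, deciding from the streamed frames themselves when an event of interest has begun, can be sketched in a few lines. The real system implements this logic in FPGA firmware; the frame-difference criterion, threshold, and region-of-interest handling below are illustrative stand-ins.

    ```python
    import numpy as np

    def image_based_trigger(frames, threshold=5.0, region=None):
        """Return the index of the first frame whose mean absolute difference from
        the previous frame exceeds `threshold` (grey-level counts) inside an
        optional region of interest, or None if no frame triggers.

        frames : iterable of 2-D numpy arrays (the streamed image sequence).
        region : optional tuple of slices selecting the region of interest.
        """
        prev = None
        for i, frame in enumerate(frames):
            roi = frame if region is None else frame[region]
            roi = roi.astype(float)
            if prev is not None and np.mean(np.abs(roi - prev)) > threshold:
                return i          # event detected: start (or keep) full-rate recording here
            prev = roi
        return None
    ```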

  19. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Several pivotal designs of the imaging system are then described, including the focal plane module, video signal processing, the imaging system controller, synchronous photography of the forward, nadir, and backward cameras, and the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are presented, as follows: the precision of synchronous photography of the forward, nadir, and backward cameras is better than 4 ns, as is that of the nadir camera's line-matrix CCD; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal plane module; the SNR of each CCD image, tested in the laboratory, is better than 95 under typical working conditions (solar incidence angle of 30 degrees, earth surface reflectivity of 0.3); and the temperature of the focal plane module is controlled below 30 degrees over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane module temperature control, and SNR, which guarantees the precision required for satellite photogrammetry.

  20. SMS engineering design report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The engineering design for the Shuttle Missions Simulator is presented in sections, with each section representing a subsystem development activity. Subsystems covered include: electrical power system; mechanical power system; main propellant and external tank; solid rocket booster; reaction control system; orbital maneuvering system; guidance, navigation, and control; data processing system; mission control center interface; and image display system.

  1. A Wireless Capsule Endoscope System With Low-Power Controlling and Processing ASIC.

    PubMed

    Xinkai Chen; Xiaoyu Zhang; Linwei Zhang; Xiaowen Li; Nan Qi; Hanjun Jiang; Zhihua Wang

    2009-02-01

    This paper presents the design of a wireless capsule endoscope system. The proposed system is mainly composed of a CMOS image sensor, an RF transceiver, and a low-power controlling and processing application-specific integrated circuit (ASIC). Several design challenges involving system power reduction, system miniaturization, and the wireless wake-up method are resolved by employing an optimized system architecture, integration of an area- and power-efficient image compression module, a power management unit (PMU), and a novel wireless wake-up subsystem with zero standby current in the ASIC design. The ASIC has been fabricated in 0.18-μm CMOS technology with a die area of 3.4 mm × 3.3 mm. The digital baseband can work with a power supply down to 0.95 V and a power dissipation of 1.3 mW. A prototype capsule based on the ASIC and a data recorder has been developed. Test results show that the proposed system architecture with local image compression leads to an average 45% energy reduction for transmitting an image frame.

  2. Integrated command, control, communication and computation system design study. Summary of tasks performed

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A summary of tasks performed on an integrated command, control, communication, and computation system design study is given. The Tracking and Data Relay Satellite System command and control system study, an automated real-time operations study, and image processing work are discussed.

  3. Display And Analysis Of Tomographic Volumetric Images Utilizing A Vari-Focal Mirror

    NASA Astrophysics Data System (ADS)

    Harris, L. D.; Camp, J. J.

    1984-10-01

    A system for the three-dimensional (3-D) display and analysis of stacks of tomographic images is described. The device utilizes the principle of a variable focal (vari-focal) length optical element in the form of an aluminized membrane stretched over a loudspeaker to generate a virtual 3-D image which is a visible representation of a 3-D array of image elements (voxels). The system displays 500,000 voxels per mirror cycle in a 3-D raster which appears continuous and demonstrates no distracting artifacts. The display is bright enough so that portions of the image can be dimmed without compromising the number of shades of gray. For x-ray CT, a displayed volume image looks like a 3-D radiograph which appears to be in the space directly behind the mirror. The viewer sees new views by moving his/her head from side to side or up and down. The system facilitates a variety of operator interactive functions which allow the user to point at objects within the image, control the orientation and location of brightened oblique planes within the volume, numerically dissect away selected image regions, and control intensity window levels. Photographs of example volume images displayed on the system illustrate, to the degree possible in a flat picture, the nature of displayed images and the capabilities of the system. Preliminary application of the display device to the analysis of volume reconstructions obtained from the Dynamic Spatial Reconstructor indicates significant utility of the system in selecting oblique sections and gaining an appreciation of the shape and dimensions of complex organ systems.

  4. Hard real-time beam scheduler enables adaptive images in multi-probe systems

    NASA Astrophysics Data System (ADS)

    Tobias, Richard J.

    2014-03-01

    Real-time embedded-system concepts were adapted to allow an imaging system to responsively control the firing of multiple probes. Large-volume, operator-independent (LVOI) imaging would increase the diagnostic utility of ultrasound. An obstacle to this innovation is the inability of current systems to drive multiple transducers dynamically. Commercial systems schedule scanning with static lists of beams to be fired and processed; here we allow an imager to adapt to changing beam schedule demands, as an intelligent response to incoming image data. An example of scheduling changes is demonstrated with a flexible duplex-mode two-transducer application mimicking LVOI imaging. Embedded-system concepts allow an imager to responsively control the firing of multiple probes. Operating systems use powerful dynamic scheduling algorithms, such as fixed-priority preemptive scheduling. Even real-time operating systems lack the timing constraints required for ultrasound. Particularly for Doppler modes, events must be scheduled with sub-nanosecond precision, and acquired data is useless if this requirement is not met. A successful scheduler needs unique characteristics. To get close to what would be needed in LVOI imaging, we show two transducers scanning different parts of a subject's leg. When one transducer notices flow in a region where the two scans overlap, the system reschedules the other transducer to start flow mode and alter its beams to get a view of the observed vessel and produce a flow measurement. The second transducer does this in a focused region only. This demonstrates key attributes of a successful LVOI system, such as robustness against obstructions and adaptive self-correction.

  5. Design of rapid prototype of UAV line-of-sight stabilized control system

    NASA Astrophysics Data System (ADS)

    Huang, Gang; Zhao, Liting; Li, Yinlong; Yu, Fei; Lin, Zhe

    2018-01-01

    The line-of-sight (LOS) stabilized platform is a key technology for UAVs (unmanned aerial vehicles), as it reduces the degradation of imaging quality caused by vibration and maneuvering of the aircraft. According to the requirements of the LOS stabilization system (a combined inertial and optical-mechanical method) and the UAV's structure, a rapid prototype was designed on an industrial computer, using the Peripheral Component Interconnect (PCI) bus and Windows RTX to exchange information. The paper presents the control structure and the circuit system, including the inertial stabilization control circuit with a gyro and a voice-coil-motor drive circuit, the optical-mechanical stabilization control circuit with a fast-steering-mirror (FSM) drive circuit and an image-deviation measurement system, the outer-frame rotary follower, and the information-exchange system on the PC. Test results show that the stabilization accuracy reaches 5 μrad, proving the effectiveness of the combined line-of-sight stabilization control system, and that the real-time rapid prototype runs stably.

  6. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Rhodes, Laura A.; Davies, Andrew G.

    2015-03-01

    Dynamic X-ray imaging systems are used for interventional cardiac procedures to treat coronary heart disease. X-ray settings are controlled automatically by specially designed X-ray dose control mechanisms whose role is to ensure an adequate level of image quality is maintained with an acceptable radiation dose to the patient. Current commonplace dose control designs quantify image quality by performing a simple technical measurement directly from the image. However, the utility of cardiac X-ray images is in their interpretation by a cardiologist during an interventional procedure, rather than in a technical measurement. With the long-term goal of devising a clinically relevant image quality metric for an intelligent dose control system, we aim to investigate the relationship of image noise with clinical professionals' perception of dynamic image sequences. Computer-generated noise was added, in incremental amounts, to angiograms of five different patients selected to represent the range of adult cardiac patient sizes. A two-alternative forced-choice staircase experiment was used to determine the amount of noise that can be added to a patient's image sequences without changing image quality as perceived by clinical professionals. Twenty-five viewing sessions (five for each patient) were completed by thirteen observers. Results demonstrated scope to increase the noise of cardiac X-ray images by up to 21% +/- 8% before it is noticeable by clinical professionals. This indicates a potential for 21% radiation dose reduction, since X-ray image noise and radiation dose are directly related; this would be beneficial to both patients and personnel.
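
    The two-alternative forced-choice staircase used to find the just-noticeable amount of added noise can be sketched as follows. The up/down rule, step size, and stopping criterion here are simplified assumptions; the published experiment's exact staircase parameters may differ.

    ```python
    def run_staircase(respond, start_noise=30.0, step=2.0, n_reversals=8):
        """Simple up/down staircase on the amount of added image noise (in % of
        native noise). `respond(noise_level)` should return True when the observer
        correctly picks the noisier of the two presented sequences. The staircase
        lowers the added noise after a correct response and raises it after an
        incorrect one, converging toward a just-noticeable level."""
        level, reversals, last_correct = start_noise, [], None
        while len(reversals) < n_reversals:
            correct = respond(level)
            if last_correct is not None and correct != last_correct:
                reversals.append(level)                     # direction change -> reversal
            level = max(0.0, level - step if correct else level + step)
            last_correct = correct
        return sum(reversals) / len(reversals)              # threshold estimate
    ```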

  7. An initial trial of a prototype telepathology system featuring static imaging with discrete control of the remote microscope.

    PubMed

    Winokur, T S; McClellan, S; Siegal, G P; Reddy, V; Listinsky, C M; Conner, D; Goldman, J; Grimes, G; Vaughn, G; McDonald, J M

    1998-07-01

    Routine diagnosis of pathology images transmitted over telecommunications lines remains an elusive goal. Part of the resistance stems from the difficulty of enabling image selection by the remote pathologist. To address this problem, a telepathology microscope system (TelePath, TeleMedicine Solutions, Birmingham, Ala) that has features associated with static and dynamic imaging systems was constructed. Features of the system include near real-time image transmission, provision of a tiled overview image, free choice of any field at any desired optical magnification, and automated tracking of the pathologist's image selection. All commands and images are discrete, avoiding many inherent problems of full-motion video and continuous remote control. A set of 64 slides was reviewed by 3 pathologists in a simulated frozen section environment. Each pathologist provided diagnoses for all 64 slides, as well as qualitative information about the system. Thirty-one of 192 diagnoses disagreed with the reference diagnosis that had been reached before the trial began. Of the 31, 13 were deferrals and 12 were diagnoses of cases that had a deferral as the reference diagnosis. In 6 cases, the diagnosis disagreed with the reference diagnosis, yielding an overall accuracy of 96.9%. Confidence levels in the diagnoses were high. This trial suggests that this system provides high-quality anatomic pathology services, including intraoperative diagnoses, over telecommunications lines.

  8. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  9. Multisite two-photon imaging of neurons on multielectrode arrays

    NASA Astrophysics Data System (ADS)

    Potter, Steve M.; Lukina, Natalia; Longmuir, Kenneth J.; Wu, Yan

    2001-04-01

    We wish to understand how neural systems store, recall, and process information. We are using cultured networks of cortical neurons grown on microelectrode arrays as a model system for studying the emergent properties of ensembles of living neurons. We have developed a 2-way communication interface between the cultured network and a computer-generated animal, the Neurally Controlled Animat. Neural activity is used to control the behavior of the Animat, and 2-photon time-lapse imaging is carried out in order to observe the morphological changes that might underlie changes in neural processing. The 2-photon microscope is ideal for repeated imaging over hours or days, with submicron resolution and little photodamage. We have designed a computer-controlled microscope stage that allows imaging several locations in sequence, in order to collect more image data. For the latest progress, see: http://www.caltech.edu/~pinelab/PotterGroup.htm.

  10. Imaging arrangement and microscope

    DOEpatents

    Pertsinidis, Alexandros; Chu, Steven

    2015-12-15

    An embodiment of the present invention is an imaging arrangement that includes imaging optics, a fiducial light source, and a control system. In operation, the imaging optics separate light into first and second light by wavelength and project the first and second light onto first and second areas within first and second detector regions, respectively. The imaging optics separate fiducial light from the fiducial light source into first and second fiducial light and project the first and second fiducial light onto third and fourth areas within the first and second detector regions, respectively. The control system adjusts alignment of the imaging optics so that the first and second fiducial light projected onto the first and second detector regions maintain relatively constant positions within the first and second detector regions, respectively. Another embodiment of the present invention is a microscope that includes the imaging arrangement.

  11. Novel imaging closed loop control strategy for heliostats

    NASA Astrophysics Data System (ADS)

    Bern, Gregor; Schöttl, Peter; Heimsath, Anna; Nitz, Peter

    2017-06-01

    Central Receiver Systems use up to thousands of heliostats to concentrate solar radiation. The precise control of heliostat aiming points is crucial not only for efficiency but also for reliable plant operation. Besides the calibration of open loop control systems, closed loop tracking strategies are developed to address a precise and efficient aiming strategy. The need for cost reductions in the heliostat field intensifies the motivation for economic closed loop control systems. This work introduces an approach for a closed loop heliostat tracking strategy using image analysis and signal modulation. The approach aims at the extraction of heliostat focal spot position within the receiver domain by means of a centralized remote vision system decoupled from the rough conditions close to the focal area. Taking an image sequence of the receiver while modulating a signal on different heliostats, their aiming points are retrieved. The work describes the methodology and shows first results from simulations and practical tests performed in small scale, motivating further investigation and deployment.

  12. Wave-Optics Analysis of Pupil Imaging

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Bos, Brent J.

    2006-01-01

    Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization where the pupil images are used as a constraint to the wavefront sensing and control process. Practical design considerations follow from the diffraction modeling which are discussed in the context of the JWST Observatory.

  13. Three-dimensional image display system using stereogram and holographic optical memory techniques

    NASA Astrophysics Data System (ADS)

    Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong

    2001-09-01

    In this paper, we implement a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, the incidence angle of the reference beam must be controlled in real time in order to store and reconstruct stereo images, so a BPH (binary phase hologram) and an LCD (liquid crystal display) are used to control the reference beam. The input images are displayed on the LCD without polarizer/analyzer so that the beam intensity remains uniform regardless of the brightness of the input images. The input images and BPHs are sequenced by application software with the same scheduled recording interval during storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera behind the analyzer, which converts phase information into image brightness. The reference beams are obtained by Fourier transforming BPHs designed with an SA (simulated annealing) algorithm and displayed on the LCD at 0.05-second intervals by the application software during reconstruction. In the output plane, an LCD shutter synchronized to a monitor displays alternate left- and right-eye images for depth perception. An optical experiment demonstrated repeated storage and reconstruction of four stereo images in BaTiO3 using holographic optical memory techniques.
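
    The abstract notes that the BPHs are designed with a simulated annealing (SA) algorithm. The sketch below illustrates the general principle only and is not the authors' implementation: a small binary (0/pi) phase grid is annealed so that its far-field intensity approaches a toy target pattern; the grid size, cost function, annealing schedule and target are assumptions.

```python
import numpy as np

def far_field_intensity(phase_bits):
    """Far-field intensity of a binary (0/pi) phase hologram via a 2-D FFT."""
    field = np.exp(1j * np.pi * phase_bits)           # bit 0 -> phase 0, bit 1 -> phase pi
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

def cost(phase_bits, target):
    """Mean squared error between the normalized reconstruction and the target pattern."""
    recon = far_field_intensity(phase_bits)
    recon /= recon.max()
    return np.mean((recon - target) ** 2)

def anneal(target, size=32, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, (size, size))
    current, temp = cost(bits, target), t0
    for _ in range(steps):
        i, j = rng.integers(0, size, 2)
        bits[i, j] ^= 1                               # propose: flip one phase pixel
        trial = cost(bits, target)
        if trial < current or rng.random() < np.exp((current - trial) / temp):
            current = trial                           # accept the flip (Metropolis rule)
        else:
            bits[i, j] ^= 1                           # reject: flip back
        temp *= cooling
    return bits, current

# Toy target: a single bright off-axis spot in the reconstruction plane (purely illustrative).
target = np.zeros((32, 32)); target[8, 20] = 1.0
hologram, err = anneal(target)
print("final cost:", err)
```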

  14. Functional mesoporous silica nanoparticles for bio-imaging applications.

    PubMed

    Cha, Bong Geun; Kim, Jaeyun

    2018-03-22

    Biomedical investigations using mesoporous silica nanoparticles (MSNs) have received significant attention because of their unique properties including controllable mesoporous structure, high specific surface area, large pore volume, and tunable particle size. These unique features make MSNs suitable for simultaneous diagnosis and therapy with unique advantages to encapsulate and load a variety of therapeutic agents, deliver these agents to the desired location, and release the drugs in a controlled manner. Among various clinical areas, nanomaterials-based bio-imaging techniques have advanced rapidly with the development of diverse functional nanoparticles. Due to the unique features of MSNs, an imaging agent supported by MSNs can be a promising system for developing targeted bio-imaging contrast agents with high structural stability and enhanced functionality that enable imaging of various modalities. Here, we review the recent achievements on the development of functional MSNs for bio-imaging applications, including optical imaging, magnetic resonance imaging (MRI), positron emission tomography (PET), computed tomography (CT), ultrasound imaging, and multimodal imaging for early diagnosis. With further improvement in noninvasive bio-imaging techniques, the MSN-supported imaging agent systems are expected to contribute to clinical applications in the future. This article is categorized under: Diagnostic Tools > In vivo Nanodiagnostics and Imaging; Nanotechnology Approaches to Biology > Nanoscale Systems in Biology. © 2018 Wiley Periodicals, Inc.

  15. A Feedforward Adaptive Controller to Reduce the Imaging Time of Large-Sized Biological Samples with a SPM-Based Multiprobe Station

    PubMed Central

    Otero, Jorge; Guerrero, Hector; Gonzalez, Laura; Puig-Vidal, Manel

    2012-01-01

    The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make it impossible to conduct certain experiments. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls are based on the difference between the maximum speeds that can be used for imaging depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that scanning faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging process of large samples by up to a 4× factor. PMID:22368491
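
    A minimal sketch of the speed-switching idea, assuming a bang-bang rule driven by the local height gradient of a previously acquired coarse topography line; the flatness threshold and the two speeds are invented for illustration and this is not the authors' controller.

```python
import numpy as np

def per_zone_speeds(height_line, flat_threshold_nm=5.0,
                    v_fast_um_s=40.0, v_slow_um_s=10.0):
    """Bang-bang speed schedule: scan fast where the height changes little, slow elsewhere.

    height_line -- 1-D topography (nm) from a previous, coarse pass
    returns     -- per-pixel scan speed (um/s) for the next, detailed pass
    """
    slope = np.abs(np.gradient(height_line))          # local height change per pixel
    return np.where(slope < flat_threshold_nm, v_fast_um_s, v_slow_um_s)

# Example: a mostly flat substrate with one bacterium-like bump.
x = np.linspace(0, 10, 500)                           # scan position (um)
height = 300.0 * np.exp(-((x - 5.0) / 0.5) ** 2)      # height profile (nm)
speeds = per_zone_speeds(height)
line_time = np.sum((x[1] - x[0]) / speeds)            # time to scan the line (s)
print("line scan time: %.2f s (constant slow speed: %.2f s)"
      % (line_time, (x[-1] - x[0]) / 10.0))
```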

  16. Controlling system for smart hyper-spectral imaging array based on liquid-crystal Fabry-Perot device

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Chen, Xin; Rong, Xin; Liu, Kan; Zhang, Xinyu; Ji, An; Xie, Changsheng

    2011-11-01

    We are developing a smart spectral imaging detection technique based on an electrically tunable liquid-crystal (LC) Fabry-Perot (FP) structure. It has the advantages of low cost, highly compact integration, wavelength selection without moving any micro-mirror of the FP device, and high reliability and stability. The controlling system for the hyper-spectral imaging array based on the LC-FP device is built around an MSP430F5438 microcontroller as its core. Considering the characteristics of the LC-FP device, the controlling system can provide a driving signal of 1-10 kHz and 0-30 Vrms for the device in a static driving mode. This paper introduces the hardware design of the control system in detail. It presents an overall hardware solution including: (1) the MSP430 controlling circuit, (2) the operational amplifier circuit, (3) the power supply circuit, and (4) the AD conversion circuit. The techniques for realizing the special high-speed digital circuits required for the PCB employed are also discussed.

  17. Visual systemizing preference in children with autism: A randomized controlled trial of intranasal oxytocin.

    PubMed

    Strathearn, Lane; Kim, Sohye; Bastian, D Anthony; Jung, Jennifer; Iyengar, Udita; Martinez, Sheila; Goin-Kochel, Robin P; Fonagy, Peter

    2018-05-01

    Several studies have suggested that the neuropeptide oxytocin may enhance aspects of social communication in autism. Little is known, however, about its effects on nonsocial manifestations, such as restricted interests and repetitive behaviors. In the empathizing-systemizing theory of autism, social deficits are described along the continuum of empathizing ability, whereas nonsocial aspects are characterized in terms of an increased preference for patterned or rule-based systems, called systemizing. We therefore developed an automated eye-tracking task to test whether children and adolescents with autism spectrum disorder (ASD) compared to matched controls display a visual preference for more highly organized and structured (systemized) real-life images. Then, as part of a randomized, double-blind, placebo-controlled crossover study, we examined the effect of intranasal oxytocin on systemizing preferences in 16 male children with ASD, compared with 16 matched controls. Participants viewed 14 slides, each containing four related pictures (e.g., of people, animals, scenes, or objects) that differed primarily on the degree of systemizing. Visual systemizing preference was defined in terms of the fixation time and count for each image. Unlike control subjects who showed no gaze preference, individuals with ASD preferred to fixate on more highly systemized pictures. Intranasal oxytocin eliminated this preference in ASD participants, who now showed a similar response to control subjects on placebo. In contrast, control participants increased their visual preference for more systemized images after receiving oxytocin versus placebo. These results suggest that, in addition to its effects on social communication, oxytocin may play a role in some of the nonsocial manifestations of autism.

  18. Design Method of Digital Optimal Control Scheme and Multiple Paralleled Bridge Type Current Amplifier for Generating Gradient Magnetic Fields in MRI Systems

    NASA Astrophysics Data System (ADS)

    Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo

    This paper deals with a digital control scheme for a multiple-paralleled high-frequency switching current amplifier with a four-quadrant chopper for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track a highly precise current pattern in the Gradient Coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other, and optimum switching gate pulse patterns are designed without being influenced by the large filter current ripple amplitude. The optimal control implementation and linear control theory fit together naturally in GC current amplifiers and give excellent characteristics. The control scheme can be realized easily in digital form on DSPs or microprocessors. Multiple microprocessors operating in parallel realize a two- or higher-paralleled GC current-pattern-tracking amplifier with the optimal control design, and excellent results are given for improving the image quality of MRI systems.

  19. Next Generation Image-Based Phenotyping of Root System Architecture

    NASA Astrophysics Data System (ADS)

    Davis, T. W.; Shaw, N. M.; Cheng, H.; Larson, B. G.; Craft, E. J.; Shaff, J. E.; Schneider, D. J.; Piñeros, M. A.; Kochian, L. V.

    2016-12-01

    The development of the Plant Root Imaging and Data Acquisition (PRIDA) hardware/software system enables researchers to collect digital images, along with all the relevant experimental details, of a range of hydroponically grown agricultural crop roots for 2D and 3D trait analysis. Previous efforts of image-based root phenotyping focused on young cereals, such as rice; however, there is a growing need to measure both older and larger root systems, such as those of maize and sorghum, to improve our understanding of the underlying genetics that control favorable rooting traits for plant breeding programs to combat the agricultural risks presented by climate change. Therefore, a larger imaging apparatus has been prototyped for capturing 3D root architecture with an adaptive control system and innovative plant root growth media that retains three-dimensional root architectural features. New publicly available multi-platform software has been released with considerations for both high throughput (e.g., 3D imaging of a single root system in under ten minutes) and high portability (e.g., support for the Raspberry Pi computer). The software features unified data collection, management, exploration and preservation for continued trait and genetics analysis of root system architecture. The new system makes data acquisition efficient and includes features that address the needs of researchers and technicians, such as reduced imaging time, semi-automated camera calibration with uncertainty characterization, and safe storage of the critical experimental data.

  20. Imaging System For Measuring Macromolecule Crystal Growth Rates in Microgravity

    NASA Technical Reports Server (NTRS)

    Corder, Eric L.; Briscoe, Jeri

    2004-01-01

    In order to determine how macromolecule crystal quality improvement in microgravity is related to crystal growth characteristics, a team of scientists and engineers at NASA's Marshall Space Flight Center (MSFC) developed flight hardware capable of measuring the crystal growth rates of a population of crystals growing under the same conditions. As crystal growth rate is defined as the change or delta in a defined dimension or length (L) of crystal over time, the hardware was named Delta-L. Delta-L consists of three subassemblies: a fluid unit including a temperature-controlled growth cell, an imaging unit, and a control unit (consisting of a Data Acquisition and Control Unit (DACU) and a thermal control unit). Delta-L will be used in connection with the Glovebox Integrated Microgravity Isolation Technology (g-LIMIT) inside the Microgravity Science Glovebox (MSG), onboard the International Space Station. This paper will describe the Delta-L imaging system. The Delta-L imaging system was designed to locate, resolve, and capture images of up to 10 individual crystals ranging in size from 10 to 500 microns with a point-to-point accuracy of +/- 2.0 microns within a quartz growth cell observation area of 20 mm x 10 mm x 1 mm. The optical imaging system is comprised of a video microscope camera mounted on computer-controlled translation stages. The 3-axis translation stages and control units provide crewmembers the ability to search throughout the growth cell observation area for crystals forming at a size of approximately 10 microns. Once the crewmember has selected ten crystals of interest, the growth of these crystals is tracked until the size reaches approximately 500 microns. In order to resolve these crystals, an optical system with a magnification of 10X was designed. A black and white NTSC camera was utilized with a 20X microscope objective and a 0.5X custom designed relay lens with an inline light to meet the magnification requirement. The design allows a 500 µm crystal to be viewed in the vertical dimension on a standard NTSC monitor (4:3 aspect ratio). Images of the 10 crystals are collected periodically and stored in sets by the DACU.

  1. A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT.

    PubMed

    Badea, Cristian T; Hedlund, Laurence W; Johnson, G Allan

    2013-01-01

    CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging.

  2. A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT

    PubMed Central

    Badea, Cristian T.; Hedlund, Laurence W.; Johnson, G. Allan

    2013-01-01

    CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging. PMID:27006920

  3. Security middleware infrastructure for DICOM images in health information systems.

    PubMed

    Kallepalli, Vijay N V; Ehikioya, Sylvanus A; Camorlinga, Sergio; Rueda, Jose A

    2003-12-01

    In health care, it is mandatory to maintain the privacy and confidentiality of medical data. To achieve this, a fine-grained access control and an access log for accessing medical images are two important aspects that need to be considered in health care systems. Fine-grained access control provides access to medical data only to authorized persons based on priority, location, and content. A log captures each attempt to access medical data. This article describes an overall middleware infrastructure required for secure access to Digital Imaging and Communication in Medicine (DICOM) images, with an emphasis on access control and log maintenance. We introduce a hybrid access control model that combines the properties of two existing models. A trust relationship between hospitals is used to make the hybrid access control model scalable across hospitals. We also discuss events that have to be logged and where the log has to be maintained. A prototype of security middleware infrastructure is implemented.

  4. A micro-fluidic treadmill for observing suspended plankton in the lab

    NASA Astrophysics Data System (ADS)

    Jaffe, J. S.; Laxton, B.; Garwood, J. C.; Franks, P. J. S.; Roberts, P. L.

    2016-02-01

    A significant obstacle to laboratory studies of interactions between small organisms (mm scale) and their fluid environment is our ability to obtain high-resolution images while allowing freedom of motion. This is because as the organisms sink, they will often move out of the field of view of the observation system. One solution to this problem is to impose a water circulation pattern that preserves their location relative to the camera system while imaging the organisms away from the glass walls. To accomplish this we have designed and created a plankton treadmill. Our computer-controlled system consists of a digital video camera attached to a macro or microscope and a micro-fluidic pump whose flow is regulated to maintain a suspended organism's position relative to the field of view. Organisms are detected and tracked in real time in the video frames, allowing a control algorithm to compensate for any vertical movement by adjusting the flow. The flow control can be manually adjusted using on-screen controls, semi-automatically adjusted to allow the user to select a particular organism to be tracked or fully automatic through the use of classification and tracking algorithms. Experiments with a simple cm-sized cuvette and a number of organisms that are both positively and negatively buoyant have demonstrated the success of the system in permitting longer observation times than would be possible in the absence of a controlled-flow environment. The subjects were observed using a new dual-view, holographic imaging system that provides 3-dimensional microscopic observations with relatively isotropic resolution. We will present the system design, construction, the control algorithm, and some images obtained with the holographic system, demonstrating its effectiveness. Small particles seeded into the flow clearly show the 3D flow fields around the subjects as they freely sink or swim.
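
    The record does not give the control law, so the sketch below assumes a simple proportional controller: the pump flow is commanded from the organism's tracked vertical offset from the frame center. The gain, flow limits and the toy sinking model are assumptions for illustration only.

```python
import numpy as np

def flow_command(y_pixel, frame_height, gain=2.0, flow_min=-50.0, flow_max=50.0):
    """Proportional flow command from the tracked vertical position.

    y_pixel      -- detected organism centroid row in the current frame
    frame_height -- image height in pixels
    returns      -- pump flow (uL/min); positive values push the organism upward
    """
    error = y_pixel - frame_height / 2.0          # pixels below (+) or above (-) the center
    return float(np.clip(gain * error, flow_min, flow_max))

# Simulated run: an organism sinking 2 px/frame unless the upward flow compensates.
y = 120.0
for frame in range(200):
    flow = flow_command(y, frame_height=240)      # update flow from the last tracked position
    y += 2.0 - 0.05 * flow                        # toy plant model: sink rate versus flow
print("final position: %.1f px (frame center is 120), flow: %.1f uL/min" % (y, flow))
```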

  5. A safety monitoring system for taxi based on CMOS imager

    NASA Astrophysics Data System (ADS)

    Liu, Zhi

    2005-01-01

    CMOS image sensors have become increasingly competitive with respect to their CCD counterparts, while adding advantages such as no blooming, simpler driving requirements and the potential for on-chip integration of sensor, analogue circuitry, and digital processing functions. A safety monitoring system for taxis, based on a CMOS imager, that can record the field situation when an unusual circumstance occurs is described in this paper. The monitoring system is based on a CMOS imager (OV7120), which can output digital image data through a parallel pixel data port. The system consists of a CMOS image sensor, a large capacity NAND FLASH ROM, a USB interface chip and a microcontroller (AT90S8515). The structure of the whole system and the test data are discussed and analyzed in detail.

  6. [Quality control of laser imagers].

    PubMed

    Winkelbauer, F; Ammann, M; Gerstner, N; Imhof, H

    1992-11-01

    Multiformat imagers based on laser systems are used for documentation in an increasing number of investigations. The specific problems of quality control are explained, and the constancy of film processing is investigated in imager systems of two different configurations, one with a directly connected film processing unit (Machine 1: 3M Laser Imager Plus M952 with connected 3M film processor, 3M IRB film, 3M-XPM X-ray chemical mixer, 3M developer and fixer) and one without (Machine 2: 3M Laser Imager Plus M952 with separate DuPont Cronex film processor, Kodak IR film, Kodak automixer, Kodak developer and fixer). In our checks based on DIN 6868 and ONORM S 5240, the equipment with the directly adapted film processing unit showed film-processing constancy in accordance with DIN and ONORM. The checks of film-processing constancy demanded by DIN 6868 could therefore be performed at longer intervals for this equipment. Systems with conventional darkroom processing show clearly increased fluctuation by comparison, and hence the demanded daily control is essential to guarantee an appropriate response and constant documentation quality.

  7. Setting Standards for Reporting and Quantification in Fluorescence-Guided Surgery.

    PubMed

    Hoogstins, Charlotte; Burggraaf, Jan Jaap; Koller, Marjory; Handgraaf, Henricus; Boogerd, Leonora; van Dam, Gooitzen; Vahrmeijer, Alexander; Burggraaf, Jacobus

    2018-05-29

    Intraoperative fluorescence imaging (FI) is a promising technique that could potentially guide oncologic surgeons toward more radical resections and thus improve clinical outcome. Despite the increase in the number of clinical trials, fluorescent agents and imaging systems for intraoperative FI, a standardized approach for imaging system performance assessment and post-acquisition image analysis is currently unavailable. We conducted a systematic, controlled comparison between two commercially available imaging systems using a novel calibration device for FI systems and various fluorescent agents. In addition, we analyzed fluorescence images from previous studies to evaluate signal-to-background ratio (SBR) and determinants of SBR. Using the calibration device, imaging system performance could be quantified and compared, exposing relevant differences in sensitivity. Image analysis demonstrated a profound influence of background noise and the selection of the background on SBR. In this article, we suggest clear approaches for the quantification of imaging system performance assessment and post-acquisition image analysis, attempting to set new standards in the field of FI.

  8. Robot-assisted ultrasound imaging: overview and development of a parallel telerobotic system.

    PubMed

    Monfaredi, Reza; Wilson, Emmanuel; Azizi Koutenaei, Bamshad; Labrecque, Brendan; Leroy, Kristen; Goldie, James; Louis, Eric; Swerdlow, Daniel; Cleary, Kevin

    2015-02-01

    Ultrasound imaging is frequently used in medicine. The quality of ultrasound images is often dependent on the skill of the sonographer. Several researchers have proposed robotic systems to aid in ultrasound image acquisition. In this paper we first provide a short overview of robot-assisted ultrasound imaging (US). We categorize robot-assisted US imaging systems into three approaches: autonomous US imaging, teleoperated US imaging, and human-robot cooperation. For each approach several systems are introduced and briefly discussed. We then describe a compact six degree of freedom parallel mechanism telerobotic system for ultrasound imaging developed by our research team. The long-term goal of this work is to enable remote ultrasound scanning through teleoperation. This parallel mechanism allows for both translation and rotation of an ultrasound probe mounted on the top plate along with force control. Our experimental results confirmed good mechanical system performance with a positioning error of < 1 mm. Phantom experiments by a radiologist showed promising results with good image quality.

  9. CRionScan: A stand-alone real time controller designed to perform ion beam imaging, dose controlled irradiation and proton beam writing

    NASA Astrophysics Data System (ADS)

    Daudin, L.; Barberet, Ph.; Serani, L.; Moretto, Ph.

    2013-07-01

    High resolution ion microbeams, usually used to perform elemental mapping, low dose targeted irradiation or ion beam lithography, need a very flexible beam control system. For this purpose, we have developed a dedicated system (called “CRionScan”) on the AIFIRA facility (Applications Interdisciplinaires des Faisceaux d'Ions en Région Aquitaine). It consists of a stand-alone real-time scanning and imaging instrument based on a Compact Reconfigurable Input/Output (Compact RIO) device from National Instruments™. It is based on a real-time controller, a Field Programmable Gate Array (FPGA), input/output modules and Ethernet connectivity. We have implemented a fast and deterministic beam scanning system interfaced with our commercial data acquisition system without any hardware development. CRionScan is built under LabVIEW™ and has been used on AIFIRA's nanobeam line since 2009 (Barberet et al., 2009, 2011) [1,2]. A Graphical User Interface (GUI) embedded in the Compact RIO as a web page is used to control the scanning parameters. In addition, a fast electrostatic beam blanking trigger has been included in the FPGA, and high speed counters (15 MHz) have been implemented to perform dose-controlled irradiation and display on-line images on the GUI. Analog to Digital converters are used for the beam current measurement and, in the near future, for secondary electron imaging. Other functionalities have been integrated in this controller, such as LED lighting using pulse width modulation and data acquisition with a “NIM Wilkinson ADC”.

  10. A Guide to Structured Illumination TIRF Microscopy at High Speed with Multiple Colors

    PubMed Central

    Young, Laurence J.; Ströhl, Florian; Kaminski, Clemens F.

    2016-01-01

    Optical super-resolution imaging with structured illumination microscopy (SIM) is a key technology for the visualization of processes at the molecular level in the chemical and biomedical sciences. Although commercial SIM systems are available, systems that are custom designed in the laboratory can outperform commercial systems, the latter typically designed for ease of use and general purpose applications, both in terms of imaging fidelity and speed. This article presents an in-depth guide to building a SIM system that uses total internal reflection (TIR) illumination and is capable of imaging at up to 10 Hz in three colors at a resolution reaching 100 nm. Due to the combination of SIM and TIRF, the system provides better image contrast than rival technologies. To achieve these specifications, several optical elements are used to enable automated control over the polarization state and spatial structure of the illumination light for all available excitation wavelengths. Full details on hardware implementation and control are given to achieve synchronization between excitation light pattern generation, wavelength, polarization state, and camera control with an emphasis on achieving maximum acquisition frame rate. A step-by-step protocol for system alignment and calibration is presented and the achievable resolution improvement is validated on ideal test samples. The capability for video-rate super-resolution imaging is demonstrated with living cells. PMID:27285848

  11. New image-stabilizing system

    NASA Astrophysics Data System (ADS)

    Zhao, Yuejin

    1996-06-01

    In this paper, a new method for image stabilization with a three-axis image-stabilizing reflecting prism assembly is presented, and the principle of image stabilization in this prism assembly, formulae for image stabilization and working formulae with an approximation up to the third power are given in detail. In this image-stabilizing system, a single-chip microcomputer is used to calculate the values of the compensating angles and thus to control the prism assembly. Two gyroscopes act as sensors from which information about the angular perturbation is obtained, and three stepping motors drive the prism assembly to compensate for the image movement produced by the angular perturbation. The image-stabilizing device so established is a multifold system which involves optics, mechanics, electronics and computing.

  12. VICAR image processing system guide to system use

    NASA Technical Reports Server (NTRS)

    Seidman, J. B.

    1977-01-01

    The functional characteristics and operating requirements of the VICAR (Video Image Communication and Retrieval) system are described. An introduction to the system describes the functional characteristics and the basic theory of operation. A brief description of the data flow as well as tape and disk formats is also presented. A formal presentation of the control statement formats is given along with a guide to usage of the system. The guide provides a step-by-step reference to the creation of a VICAR control card deck. Simple examples are employed to illustrate the various options and the system response thereto.

  13. The x-ray light valve: a potentially low-cost, digital radiographic imaging system-concept and implementation considerations.

    PubMed

    Webster, Christie Ann; Koprinarov, Ivaylo; Germann, Stephen; Rowlands, J A

    2008-03-01

    New x-ray radiographic systems based on large-area flat-panel technology have revolutionized our capability to produce digital x-ray images. However, these imagers are extraordinarily expensive compared to the systems they are replacing. Hence, there is a need for a low-cost digital imaging system for general applications in radiology. A novel potentially low-cost radiographic imaging system based on established technologies is proposed: the X-Ray Light Valve (XLV). This is a potentially high-quality digital x-ray detector made of a photoconducting layer and a liquid-crystal cell, physically coupled in a sandwich structure. Upon exposure to x rays, charge is collected on the surface of the photoconductor. This causes a change in the optical properties of the liquid-crystal cell and a visible image is generated. Subsequently, it is digitized by a scanned optical imager. The image formation is based on controlled modulation of light from an external source. The operation and practical implementation of the XLV system are described. The potential performance of the complete system and issues related to sensitivity, spatial resolution, noise, and speed are discussed. The feasibility of clinical use of an XLV device based on amorphous selenium (a-Se) as the photoconductor and a reflective electrically controlled birefringence cell is analyzed. The results of our analysis indicate that the XLV can potentially be adapted to a wide variety of radiographic tasks.

  14. Photo-Controlled Waves and Active Locomotion.

    PubMed

    Epstein, Irving R; Gao, Qingyu

    2017-08-22

    Waves of chemical concentration, created by the interaction between reaction and diffusion, occur in a number of chemical systems far from equilibrium. In appropriately chosen polymer gels, these waves generate mechanical forces, which can result in locomotion. When a component of the system is photosensitive, light can be used to modulate and control these waves. In this Concept article, we examine various forms of photo-control of such systems, focusing particularly on the Belousov-Zhabotinsky oscillating chemical reaction. The phenomena we consider include image storage and image processing, feedback-control and feedback-induced clustering of waves, and phototropic and photophobic locomotion. Several of these phenomena have analogues in or potential applications to biological systems. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. A New Approach to Create Image Control Networks in ISIS

    NASA Astrophysics Data System (ADS)

    Becker, K. J.; Berry, K. L.; Mapel, J. A.; Walldren, J. C.

    2017-06-01

    A new approach was used to create a feature-based control point network that required the development of new tools in the Integrated Software for Imagers and Spectrometers (ISIS3) system to process very large datasets.

  16. Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies

    PubMed Central

    López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P.; Bonastre, Alberto; Serrano, Juan J.

    2012-01-01

    Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the main key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to obtain the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMax, 3G/4G, etc.). PMID:23202232
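
    The per-trap counting step could, for instance, be approximated by thresholding and connected-component labelling; the sketch below is an illustrative stand-in rather than the authors' processing pipeline, and the intensity threshold and minimum blob size are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_insects(gray_image, threshold=60, min_area_px=30):
    """Count dark blobs on a light sticky-trap background.

    gray_image  -- 2-D uint8 array (0 = black, 255 = white)
    threshold   -- pixels darker than this are treated as insect candidates
    min_area_px -- blobs smaller than this are discarded as specks
    """
    mask = gray_image < threshold                      # dark objects on a light trap
    labels, n = ndimage.label(mask)                    # connected components
    areas = ndimage.sum(mask, labels, range(1, n + 1)) # pixel count of each blob
    return int(np.sum(np.asarray(areas) >= min_area_px))

# Synthetic trap image: white background with a few dark square "insects".
img = np.full((200, 200), 230, dtype=np.uint8)
for top, left in [(20, 30), (80, 120), (150, 60)]:
    img[top:top + 8, left:left + 8] = 20
print("insects counted:", count_insects(img))          # expected: 3
```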

  17. Monitoring pest insect traps by means of low-power image sensor technologies.

    PubMed

    López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P; Bonastre, Alberto; Serrano, Juan J

    2012-11-13

    Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the main key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to obtain the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMax, 3G/4G, etc.).

  18. The precision-processing subsystem for the Earth Resources Technology Satellite.

    NASA Technical Reports Server (NTRS)

    Chapelle, W. E.; Bybee, J. E.; Bedross, G. M.

    1972-01-01

    Description of the precision processor, a subsystem in the image-processing system for the Earth Resources Technology Satellite (ERTS). This processor is a special-purpose image-measurement and printing system, designed to process user-selected bulk images to produce 1:1,000,000-scale film outputs and digital image data, presented in a Universal-Transverse-Mercator (UTM) projection. The system will remove geometric and radiometric errors introduced by the ERTS multispectral sensors and by the bulk-processor electron-beam recorder. The geometric transformations required for each input scene are determined by resection computations based on reseau measurements and image comparisons with a special ground-control base contained within the system; the images are then printed and digitized by electronic image-transfer techniques.

  19. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    To address the practical demands of gun bore flaw image collection, such as accurate optimal design, complex algorithms and precise technical requirements, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly includes a computer, an electrical control box, a stepping motor and a CCD camera, and it realizes the functions of image collection, stereo matching, 3-D information reconstruction and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that it can efficiently resolve the matching ambiguity produced by uniform or repeated textures. At the same time, the system offers faster measurement speed and higher measurement precision.

  20. Onboard utilization of ground control points for image correction. Volume 2: Analysis and simulation results

    NASA Technical Reports Server (NTRS)

    1981-01-01

    An approach to remote sensing that meets future mission requirements was investigated. The deterministic acquisition of data and the rapid correction of data for radiometric effects and image distortions are the most critical limitations of remote sensing. The following topics are discussed: onboard image correction systems, GCP navigation system simulation, GCP analysis, and image correction analysis measurement.

  1. Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer- controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: The translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
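
    A minimal sketch of FFT-based focus scoring and a coarse-to-fine search, in the spirit of the description above but not the CMIS code: the score is the fraction of spectral energy above a radial cutoff, and the stage position with the highest score is kept at each step size. The cutoff, step sizes and the toy blur model are assumptions.

```python
import numpy as np

def focus_score(image, cutoff_fraction=0.25):
    """Fraction of FFT energy at spatial frequencies above a radial cutoff.

    Sharper (better focused) images push more energy toward high frequencies.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    high = r > cutoff_fraction * min(h, w) / 2
    return spectrum[high].sum() / spectrum.sum()

def autofocus(acquire, z_start, z_stop, steps=(50.0, 10.0, 2.0)):
    """Coarse-to-fine search: at each step size, keep the best-scoring z position.

    acquire(z) -- callable returning an image at stage position z (caller supplied)
    """
    best_z = 0.5 * (z_start + z_stop)
    span = z_stop - z_start
    for step in steps:
        zs = np.arange(best_z - span / 2, best_z + span / 2 + step, step)
        scores = [focus_score(acquire(z)) for z in zs]
        best_z = zs[int(np.argmax(scores))]
        span = 2 * step                               # shrink the search window
    return best_z

# Toy demonstration: "acquisition" blurs a random texture more the further z is from 100 um.
rng = np.random.default_rng(1)
sharp = rng.random((128, 128))
def acquire(z, true_focus=100.0):
    sigma = 0.05 * abs(z - true_focus) + 1e-6
    k = np.exp(-0.5 * (np.fft.fftfreq(128)[:, None] ** 2
                       + np.fft.fftfreq(128)[None, :] ** 2) * (2 * np.pi * sigma) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(sharp) * k))   # Gaussian blur applied in Fourier space
print("best focus found near z = %.1f um" % autofocus(acquire, 0.0, 200.0))
```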

  2. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
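
    A simplified sketch of the per-pixel coded-exposure idea, not the authors' algorithm: the DMD duty cycle is iteratively lowered where the last frame was too bright and raised where it was too dark, and scene radiance is then recovered by dividing the measurement by the applied modulation. The update rule, target level and sensor model are assumptions.

```python
import numpy as np

FULL_SCALE = 255.0          # 8-bit sensor
TARGET = 180.0              # desired mid-range exposure level

def update_modulation(prev_frame, prev_mod, min_mod=0.01):
    """Lower the DMD duty cycle where the last frame was too bright, raise it where too dark."""
    new_mod = prev_mod * TARGET / np.maximum(prev_frame, 1.0)
    return np.clip(new_mod, min_mod, 1.0)

def capture(radiance, modulation, rng):
    """Toy sensor model: radiance attenuated per pixel by the DMD, then noisy, clipped output."""
    signal = radiance * modulation + rng.normal(0.0, 1.0, radiance.shape)
    return np.clip(signal, 0.0, FULL_SCALE)

def recover(frame, modulation):
    """Estimate scene radiance by undoing the per-pixel attenuation."""
    return frame / modulation

# Scene with a 1000:1 dynamic range, far beyond the 8-bit sensor.
rng = np.random.default_rng(0)
radiance = np.concatenate([np.full((64, 64), 2.0), np.full((64, 64), 2000.0)], axis=1)
mod = np.ones_like(radiance)
for _ in range(10):                                  # let the per-pixel duty cycles converge
    frame = capture(radiance, mod, rng)
    mod = update_modulation(frame, mod)
frame = capture(radiance, mod, rng)
hdr = recover(frame, mod)
print("recovered radiance (dark/bright half): %.1f / %.0f"
      % (hdr[:, :64].mean(), hdr[:, 64:].mean()))
```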

  3. Industrial application of thermal image processing and thermal control

    NASA Astrophysics Data System (ADS)

    Kong, Lingxue

    2001-09-01

    Industrial application of infrared thermography is virtually boundless as it can be used in any situation where there are temperature differences. This technology has been particularly widely used in the automotive industry for process evaluation and system design. In this work, a thermal image processing technique is introduced to quantitatively calculate the heat stored in a warm/hot object and, consequently, a thermal control system is proposed to accurately and actively manage the thermal distribution within the object in accordance with the heat calculated from the thermal images.
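
    The abstract does not give the heat calculation itself, so the sketch below shows one plausible form under stated assumptions: each thermal-image pixel is treated as a slab of uniform material of known area, thickness, density and specific heat, and the heat stored above ambient is summed as Q = m c (T - T_ambient) over all pixels.

```python
import numpy as np

def stored_heat_joules(temp_c, ambient_c, pixel_area_m2, thickness_m,
                       density_kg_m3, specific_heat_j_kgk):
    """Heat stored above ambient, summed over all pixels of a thermal image.

    temp_c -- 2-D array of surface temperatures (deg C) from the thermal camera
    Each pixel is treated as a slab of uniform material (an assumption).
    """
    mass_per_pixel = density_kg_m3 * pixel_area_m2 * thickness_m       # kg imaged by one pixel
    delta_t = np.clip(temp_c - ambient_c, 0.0, None)                   # K above ambient
    return float(np.sum(mass_per_pixel * specific_heat_j_kgk * delta_t))

# Example: a 100 x 100 pixel view of an aluminium casting at ~80 degC, ambient 25 degC.
temps = np.full((100, 100), 80.0)
q = stored_heat_joules(temps, ambient_c=25.0,
                       pixel_area_m2=(2e-3) ** 2,     # each pixel images 2 mm x 2 mm (assumed)
                       thickness_m=5e-3,              # assumed 5 mm effective depth
                       density_kg_m3=2700.0,          # aluminium
                       specific_heat_j_kgk=900.0)
print("stored heat: %.0f J" % q)
```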

  4. Fluorescence-enhanced optical tomography and nuclear imaging system for small animals

    NASA Astrophysics Data System (ADS)

    Tan, I.-Chih; Lu, Yujie; Darne, Chinmay; Rasmussen, John C.; Zhu, Banghe; Azhdarinia, Ali; Yan, Shikui; Smith, Anne M.; Sevick-Muraca, Eva M.

    2012-03-01

    Near-infrared (NIR) fluorescence is an alternative modality for molecular imaging that has been demonstrated in animals and recently in humans. Fluorescence-enhanced optical tomography (FEOT) using continuous wave or frequency domain photon migration techniques could be used to provide quantitative molecular imaging in vivo if it could be validated against "gold-standard" nuclear imaging modalities, using dual-labeled imaging agents. Unfortunately, developed FEOT systems are not suitable for incorporation with CT/PET/SPECT scanners because they utilize benchtop devices and require a large footprint. In this work, we developed a miniaturized fluorescence imaging system installed in the gantry of the Siemens Inveon PET/CT scanner to enable NIR transillumination measurements. The system consists of a CCD camera equipped with NIR sensitive intensifier, a diode laser controlled by a single board compact controller, a 2-axis galvanometer, and RF circuit modules for homodyne detection of the phase and amplitude of fluorescence signals. The performance of the FEOT system was tested and characterized. A mouse-shaped solid phantom of uniform optical properties with a fluorescent inclusion was scanned using CT, and NIR fluorescence images at several projections were collected. The method of high-order approximation to the radiative transfer equation was then used to reconstruct the optical images. Dual-labeled agents were also used on a tumor-bearing mouse to validate the results of the FEOT against the PET/CT images. The results showed that the location of the fluorophore obtained from the FEOT matches the location of the tumor obtained from the PET/CT images. Besides validation of FEOT, this hybrid system could allow multimodal molecular imaging (FEOT/PET/CT) for small animal imaging.

  5. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    PubMed Central

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

    This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of piezoelectric motors and enables simultaneous robot motion and MR imaging. The developed modular robot control interface software performs image-based registration, kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller with a new implementation of the open network communication protocol OpenIGTLink. Comprehensive compatibility of the robot is evaluated inside a 3-Tesla MRI scanner using standard imaging sequences, and the signal-to-noise ratio (SNR) loss is limited to 15%. The image deterioration due to the presence and motion of the robot is unobservable, demonstrating negligible image interference. Twenty-five targeted needle placements inside gelatin phantoms utilizing an 18-gauge ceramic needle demonstrated 0.87 mm root mean square (RMS) error in 3D Euclidean distance based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962

  6. Biological Imaging Capability in the ABRS Facility on ISS

    NASA Technical Reports Server (NTRS)

    Cox, David R.; Murdoch, T.; Regan, M. F.; Meshlberger, R. J.; Mortenson, T. E.; Albino, S. A.; Paul, A. L.; Ferl, R. J.

    2010-01-01

    This slide presentation reviews the Advanced Biological Research System (ABRS) on the International Space Station (ISS) and its biological imaging capability. The ABRS is an environmental control chamber. It has two independently controlled Experiment Research Chambers (ERCs) with temperature, relative humidity and carbon dioxide controls. ABRS is a third generation plant growth system. Several experiments are reviewed, with particular interest in the use of Green Fluorescent Protein (GFP), a non-destructive plant stress reporting mechanism naturally found in jellyfish.

  7. CCD high-speed videography system with new concepts and techniques

    NASA Astrophysics Data System (ADS)

    Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

    1997-05-01

    A novel CCD high speed videography system with brand-new concepts and techniques was recently developed at Zhejiang University. The system can send a series of short flash pulses to the moving object. All of the parameters, such as flash numbers, flash durations, flash intervals, flash intensities and flash colors, can be controlled by the computer according to need. A series of moving-object images frozen by the flash pulses, carrying information about the moving object, are recorded by a CCD video camera, and the resulting images are sent to a computer to be frozen, recognized and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals or written to CD. The highest videography frequency is 30,000 images per second. The shortest image freezing time is several microseconds. The system has been applied in a wide range of fields, including energy, chemistry, medicine, biological engineering, aerodynamics, explosion, multi-phase flow, mechanics, vibration, athletic training, weapon development and national defense engineering. It can also be used on production lines to carry out online, real-time monitoring and control.

  8. Automatic retinal interest evaluation system (ARIES).

    PubMed

    Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang

    2014-01-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.

  9. Fast energy spectrum and transverse beam profile monitoring and feedback systems for the SLC linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soderstrom, E.J.; Abrams, G.S.; Weinstein, A.J.

    Fast energy spectrum and transverse beam profile monitoring systems have been tested at the SLC. The signals for each system are derived from digitizations of images on phosphor screens. Individual beam bunch images are digitized in the case of the transverse profile system and synchrotron radiation images produced by wiggler magnets for the energy spectrum. Measurements are taken at two-second intervals. Feedback elements have been installed for future use and consist of rf phase shifters to control energy spectrum and dipole correctors to control the beam launch into the linac affecting the transverse beam profile. Details of these systems, including hardware, timing, data acquisition, data reduction, measurement accuracy, and operational experience will be presented. 9 refs.
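
    As a schematic illustration of the measurement step (not the SLC software), the sketch below extracts the beam centroid and RMS width from a digitized phosphor-screen image and derives a proportional dipole-corrector change from the centroid offset; the pixel scale and calibration constant are assumptions.

```python
import numpy as np

def beam_moments(screen_image, pixel_mm=0.1):
    """Centroid (mm) and RMS width (mm) of the beam spot in one transverse plane."""
    profile = screen_image.sum(axis=0).astype(float)      # project onto the horizontal axis
    profile -= np.median(profile)                          # crude background subtraction
    profile = np.clip(profile, 0.0, None)
    x = np.arange(profile.size) * pixel_mm
    centroid = np.sum(x * profile) / np.sum(profile)
    rms = np.sqrt(np.sum((x - centroid) ** 2 * profile) / np.sum(profile))
    return centroid, rms

def corrector_setting(centroid_mm, target_mm, amps_per_mm=0.05):
    """Proportional dipole-corrector change to steer the centroid back toward the target."""
    return amps_per_mm * (target_mm - centroid_mm)

# Synthetic screen image: a Gaussian spot displaced 2 mm from the target position.
x = np.arange(200) * 0.1
spot = np.exp(-0.5 * ((x - 12.0) / 0.8) ** 2)
image = np.tile(spot, (100, 1)) * 1000.0
c, w = beam_moments(image)
print("centroid %.2f mm, rms %.2f mm, corrector delta %.3f A"
      % (c, w, corrector_setting(c, target_mm=10.0)))
```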

  10. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling up to 160 km/hr. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods that permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of automatic classification of vehicle class and recording of vehicle number plates with a success rate of around 90 percent over a 24-hour period.

  11. Image-guided positioning and tracking.

    PubMed

    Ruan, Dan; Kupelian, Patrick; Low, Daniel A

    2011-01-01

    Radiation therapy aims at maximizing tumor control while minimizing normal tissue complication. The introduction of stereotactic treatment explores the volume effect and achieves dose escalation to the tumor target with small margins. The use of ablative irradiation dose and sharp dose gradients requires accurate tumor definition and alignment between patient and treatment geometry. Patient geometry variation during treatment may significantly compromise the conformality of delivered dose and must be managed properly. Setup error and interfraction/intrafraction motion are incorporated in the target definition process by expanding the clinical target volume to planning target volume, whereas the alignment between patient and treatment geometry is obtained with an adaptive control process, by taking immediate actions in response to closely monitored patient geometry. This article focuses on the monitoring and adaptive response aspect of the problem. The term "image" in "image guidance" will be used in a most general sense, to be inclusive of some important point-based monitoring systems that can be considered as degenerate cases of imaging. Image-guided motion adaptive control, as a comprehensive system, involves a hierarchy of decisions, each of which balances simplicity versus flexibility and accuracy versus robustness. Patient specifics and machine specifics at the treatment facility also need to be incorporated into the decision-making process. Identifying operation bottlenecks from a system perspective and making informed compromises are crucial in the proper selection of image-guidance modality, the motion management mechanism, and the respective operation modes. Not intended as an exhaustive exposition, this article focuses on discussing the major issues and development principles for image-guided motion management systems. We hope this information and these methodologies will help conscientious practitioners adopt image-guided motion management systems that account for patient and institute specifics and embrace advances in knowledge and new technologies subsequent to the publication of this article.

  12. Guidance and control 1989; Proceedings of the Annual Rocky Mountain Guidance and Control Conference, Keystone, CO, Feb. 4-8, 1989

    NASA Astrophysics Data System (ADS)

    Culp, Robert D.; Lewis, Robert A.

    1989-05-01

    Papers are presented on advances in guidance, navigation, and control; guidance and control storyboard displays; attitude referenced pointing systems; guidance, navigation, and control for specialized missions; and recent experiences. Other topics of importance to support the application of guidance and control to the space community include concept design and performance test of a magnetically suspended single-gimbal control moment gyro; design, fabrication and test of a prototype double gimbal control moment gyroscope for the NASA Space Station; the Circumstellar Imaging Telescope Image Motion Compensation System providing ultra-precise control on the Space Station platform; pinpointing landing concepts for the Mars Rover Sample Return mission; and space missile guidance and control simulation and flight testing.

  13. Computational Modeling and Real-Time Control of Patient-Specific Laser Treatment of Cancer

    PubMed Central

    Fuentes, D.; Oden, J. T.; Diller, K. R.; Hazle, J. D.; Elliott, A.; Shetty, A.; Stafford, R. J.

    2014-01-01

    An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real-time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging (MRTI). The system is built on what can be referred to as cyberinfrastructure - a complex structure of high-speed network, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in vivo canine prostate. Over the course of an 18-minute laser induced thermal therapy (LITT) performed at M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data and the calibrated models controlled the bioheat transfer to within 5°C of the predetermined treatment plan. The computational arena is in Austin, Texas and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the on-going treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective. PMID:19148754
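
    For illustration of the feedback idea, the following minimal sketch tracks a planned temperature trajectory with a lumped, single-voxel approximation of the Pennes bioheat equation and a proportional laser-power controller; the real system instead calibrates a full 3-D finite element model to MR thermal imaging in real time, and all parameter values below are assumed:

```python
# Minimal sketch of temperature feedback for laser thermal therapy, using a lumped
# single-voxel approximation of the Pennes bioheat equation. All parameter values
# are assumed for illustration only.
import numpy as np

def simulate(t_plan, dt=1.0, k_p=2.0, tau=60.0, gain=8.0, t_body=37.0):
    """Track a planned temperature trajectory t_plan (deg C).

    tau  : lumped tissue cooling time constant, seconds (assumed)
    gain : steady-state temperature rise per watt of laser power, deg C / W (assumed)
    k_p  : proportional gain mapping temperature error to laser power, W / deg C
    """
    temp, history = t_body, []
    for target in t_plan:
        power = max(0.0, k_p * (target - temp))             # proportional laser power command
        temp += dt * (gain * power - (temp - t_body)) / tau  # lumped bioheat update
        history.append((temp, power))
    return history

# Treatment plan: ramp to 55 deg C over 2 minutes, then hold for 5 minutes.
plan = np.concatenate([np.linspace(37.0, 55.0, 120), np.full(300, 55.0)])
hist = simulate(plan)
errors = [abs(t - p) for (t, _), p in zip(hist, plan)]
print(f"largest tracking error during the hold phase: {max(errors[120:]):.2f} deg C")
```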

  14. Computational modeling and real-time control of patient-specific laser treatment of cancer.

    PubMed

    Fuentes, D; Oden, J T; Diller, K R; Hazle, J D; Elliott, A; Shetty, A; Stafford, R J

    2009-04-01

    An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real-time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging. The system is built on what can be referred to as cyberinfrastructure-a complex structure of high-speed network, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in vivo canine prostate. Over the course of an 18 min laser-induced thermal therapy performed at M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data and the calibrated models controlled the bioheat transfer to within 5 degrees C of the predetermined treatment plan. The computational arena is in Austin, Texas and managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the on-going treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective.

  15. When the f/# Is Not the f/#.

    ERIC Educational Resources Information Center

    Biermann, Mark L.; Biermann, Lois A. A.

    1996-01-01

    Discusses descriptions of the way in which an optical system controls the quantity of light that reaches a point on the image plane, a basic feature of optical imaging systems such as cameras, telescopes, and microscopes. (JRH)

  16. The low-order wavefront control system for the PICTURE-C mission: high-speed image acquisition and processing

    NASA Astrophysics Data System (ADS)

    Hewawasam, Kuravi; Mendillo, Christopher B.; Howe, Glenn A.; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya

    2017-09-01

    The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. The PICTURE-C low-order wavefront control (LOWC) system will be used to correct time-varying low-order aberrations due to pointing jitter, gravity sag, thermal deformation, and the gondola pendulum motion. We present the hardware and software implementation of the low-order Shack-Hartmann and reflective Lyot stop sensors. Development of the high-speed image acquisition and processing system is discussed with the emphasis on the reduction of hardware and computational latencies through the use of a real-time operating system and optimized data handling. By characterizing all of the LOWC latencies, we describe techniques to achieve a frame rate of 200 Hz with a mean latency of ~378 μs.

  17. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing especially in the field of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates at 8-bit parallel in a full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology, and it contains about 200 K gates.

  18. TECHNIQUES FOR HIGH-CONTRAST IMAGING IN MULTI-STAR SYSTEMS. I. SUPER-NYQUIST WAVEFRONT CONTROL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, S.; Belikov, R.; Bendek, E.

    2015-09-01

    Direct imaging of extra-solar planets is now a reality with the deployment and commissioning of the first generation of specialized ground-based instruments (GPI, SPHERE, P1640, and SCExAO). These systems allow imaging of planets 10^7 times fainter than their host star. For space-based missions (EXCEDE, EXO-C, EXO-S, WFIRST), various teams have demonstrated laboratory contrasts reaching 10^-10 within a few diffraction limits from the star. However, all of these current and future systems are designed to detect faint planets around a single host star, while most non-M-dwarf stars such as Alpha Centauri belong to multi-star systems. Direct imaging around binaries/multiple systems at a level of contrast allowing detection of Earth-like planets is challenging because the region of interest is contaminated by the host star's companion in addition to the host itself. Generally, the light leakage is caused by both diffraction and aberrations in the system. Moreover, the region of interest usually falls outside the correcting zone of the deformable mirror (DM) with respect to the companion. Until now, it has been thought that removing the light of a companion star is too challenging, leading to the exclusion of many binary systems from target lists of direct imaging coronagraphic missions. In this paper, we will show new techniques for high-contrast imaging of planets around multi-star systems and detail the Super-Nyquist Wavefront Control (SNWC) method, which allows wavefront errors to be controlled beyond the nominal control region of the DM. Our simulations have demonstrated that, with SNWC, raw contrasts of at least 5 × 10^-9 in a 10% bandwidth are possible.

  19. In-flight control and communication architecture of the GLORIA imaging limb sounder on atmospheric research aircraft

    NASA Astrophysics Data System (ADS)

    Kretschmer, E.; Bachner, M.; Blank, J.; Dapp, R.; Ebersoldt, A.; Friedl-Vallon, F.; Guggenmoser, T.; Gulde, T.; Hartmann, V.; Lutz, R.; Maucher, G.; Neubert, T.; Oelhaf, H.; Preusse, P.; Schardt, G.; Schmitt, C.; Schönfeld, A.; Tan, V.

    2015-06-01

    The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), a Fourier-transform-spectrometer-based limb spectral imager, operates on high-altitude research aircraft to study the transition region between the troposphere and the stratosphere. It is one of the most sophisticated systems to be flown on research aircraft in Europe, requiring constant monitoring and human intervention in addition to an automation system. To ensure proper functionality and interoperability on multiple platforms, a flexible control and communication system was laid out. The architectures of the communication system as well as the protocols used are reviewed. The integration of this architecture in the automation process as well as the scientific campaign flight application context are discussed.

  20. In-flight control and communication architecture of the GLORIA imaging limb-sounder on atmospheric research aircraft

    NASA Astrophysics Data System (ADS)

    Kretschmer, E.; Bachner, M.; Blank, J.; Dapp, R.; Ebersoldt, A.; Friedl-Vallon, F.; Guggenmoser, T.; Gulde, T.; Hartmann, V.; Lutz, R.; Maucher, G.; Neubert, T.; Oelhaf, H.; Preusse, P.; Schardt, G.; Schmitt, C.; Schönfeld, A.; Tan, V.

    2015-02-01

    The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), a Fourier transform spectrometer based limb spectral imager, operates on high-altitude research aircraft to study the transition region between the troposphere and the stratosphere. It is one of the most sophisticated systems to be flown on research aircraft in Europe, requiring constant monitoring and human intervention in addition to an automation system. To ensure proper functionality and interoperability on multiple platforms, a flexible control and communication system was laid out. The architectures of the communication system as well as the protocols used are reviewed. The integration of this architecture in the automation process as well as the scientific campaign flight application context are discussed.

  1. Airborne multicamera system for geo-spatial applications

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic; Kulkarni, Rahul R.; Lyle, Stacey; Steidley, Carl W.

    2003-08-01

    Airborne remote sensing has many applications that include vegetation detection, oceanography, marine biology, geographical information systems, and environmental coastal science analysis. Remotely sensed images, for example, can be used to study the aftermath of episodic events such as the hurricanes and floods that occur year round in the coastal bend area of Corpus Christi. This paper describes an Airborne Multi-Spectral Imaging System that uses digital cameras to provide high resolution at very high rates. The software is based on Delphi 5.0 and IC Imaging Control's ActiveX controls. Both time and the GPS coordinates are recorded. Three successful test flights have been conducted so far. The paper presents flight test results and discusses the issues being addressed to fully develop the system.

  2. Image-guided plasma therapy of cutaneous wound

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiwu; Ren, Wenqi; Yu, Zelin; Zhang, Shiwu; Yue, Ting; Xu, Ronald

    2014-02-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Despite the clinical significance in chronic wound management, no effective methods have been developed for quantitative image-guided treatment. We integrated a multimodal imaging system with a cold atmospheric plasma probe for image-guided treatment of chronic wounds. The multimodal imaging system offers a non-invasive, painless, simultaneous and quantitative assessment of cutaneous wound healing. Cold atmospheric plasma accelerates the wound healing process through many mechanisms including decontamination, coagulation and stimulation of the wound healing. The therapeutic effect of cold atmospheric plasma is studied in vivo under the guidance of the multimodal imaging system. Cutaneous wounds are created on the dorsal skin of nude mice. During the healing process, the sample wound is treated by cold atmospheric plasma at different controlled dosages, while the control wound heals naturally. The multimodal imaging system, integrating a multispectral imaging module and a laser speckle imaging module, is used to collect information on cutaneous tissue oxygenation (i.e. oxygen saturation, StO2) and blood perfusion simultaneously to assess and guide the plasma therapy. Our preliminary tests show that cold atmospheric plasma in combination with multimodal imaging guidance has the potential to facilitate the healing of chronic wounds.

  3. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  4. Fuzzy logic controller for the LOLA AO tip-tilt corrector system

    NASA Astrophysics Data System (ADS)

    Sotelo, Pablo D.; Flores, Ruben; Garfias, Fernando; Cuevas, Salvador

    1998-09-01

    At the INSTITUTO DE ASTRONOMIA we developed an adaptive optics system for the correction of the first two orders of the Zernike polynomials by measuring the image centroid. Here we discuss the two system modalities, based on two different control strategies, and we present simulations comparing the systems. For the classic control system we present telescope results.
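
    A centroid measurement of this kind can be written in a few lines; the sketch below computes the intensity-weighted centroid of a star image and converts the offset from the detector centre into tip/tilt error signals (the plate scale is an assumed value, and neither the fuzzy-logic nor the classic controller from the paper is reproduced):

```python
# Illustrative centroid-based tip/tilt error measurement; the fuzzy-logic and classic
# controllers compared in the paper are not reproduced here.
import numpy as np

def image_centroid(img):
    """Intensity-weighted centroid (x, y) of a 2-D image, in pixels."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    y_idx, x_idx = np.indices(img.shape)
    return (x_idx * img).sum() / total, (y_idx * img).sum() / total

def tip_tilt_error(img, plate_scale_arcsec_per_px=0.1):
    """Offset of the star image from the detector centre (assumed plate scale), in arcseconds."""
    cx, cy = image_centroid(img)
    ref_x, ref_y = (img.shape[1] - 1) / 2.0, (img.shape[0] - 1) / 2.0
    return ((cx - ref_x) * plate_scale_arcsec_per_px,
            (cy - ref_y) * plate_scale_arcsec_per_px)

# Synthetic star displaced 3 px right and 2 px down from the centre of a 64x64 frame
yy, xx = np.mgrid[0:64, 0:64]
star = np.exp(-(((xx - 34.5) ** 2 + (yy - 33.5) ** 2) / (2 * 2.0 ** 2)))
print(tip_tilt_error(star))   # approximately (0.3, 0.2) arcsec
```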

  5. Onboard utilization of ground control points for image correction. Volume 3: Ground control point simulation software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software developed to simulate the ground control point navigation system is described. The Ground Control Point Simulation Program (GCPSIM) is designed as an analysis tool to predict the performance of the navigation system. The system consists of two star trackers, a global positioning system receiver, a gyro package, and a landmark tracker.

  6. Techniques for High-contrast Imaging in Multi-star Systems. II. Multi-star Wavefront Control

    NASA Astrophysics Data System (ADS)

    Sirbu, D.; Thomas, S.; Belikov, R.; Bendek, E.

    2017-11-01

    Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high-contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments due to the diffraction and aberration leakage caused by companion stars. Consequently, many scientifically valuable multi-star systems are excluded from direct imaging target lists for exoplanet surveys and characterization missions. Multi-star Wavefront Control (MSWC) is a technique that uses a coronagraphic instrument’s deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. MSWC uses “non-redundant” modes on the DM to independently control speckles from each star in the dark zone. Our previous paper also introduced the Super-Nyquist wavefront control technique, which uses a diffraction grating to generate high-contrast regions beyond the Nyquist limit (nominal region correctable by the DM). These two techniques can be combined as MSWC-s to generate high-contrast regions for multi-star systems at wide (Super-Nyquist) angular separations, while MSWC-0 refers to close (Sub-Nyquist) angular separations. As a case study, a high-contrast wavefront control simulation that applies these techniques shows that the habitable region of the Alpha Centauri system can be imaged with a small aperture at 8 × 10^-9 mean raw contrast in 10% broadband light in one-sided dark holes from 1.6-5.5 λ/D. Another case study using a larger 2.4 m aperture telescope such as the Wide-Field Infrared Survey Telescope uses these techniques to image the habitable zone of Alpha Centauri at 3.2 × 10^-9 mean raw contrast in monochromatic light.

  7. In vivo imaging of inducible tyrosinase gene expression with an ultrasound array-based photoacoustic system

    NASA Astrophysics Data System (ADS)

    Harrison, Tyler; Paproski, Robert J.; Zemp, Roger J.

    2012-02-01

    Tyrosinase, a key enzyme in the production of melanin, has shown promise as a reporter of genetic activity. While green fluorescent protein has been used extensively in this capacity, it is limited in its ability to provide information deep in tissue at a reasonable resolution. As melanin is a strong absorber of light, it is possible to image gene expression using tyrosinase with photoacoustic imaging technologies, resulting in excellent resolutions at multiple-centimeter depths. While our previous work has focused on creating and imaging MCF-7 cells with doxycycline-controlled tyrosinase expression, we have now established the viability of these cells in a murine model. Using an array-based photoacoustic imaging system with 5 MHz center frequency, we capture interleaved ultrasound and photoacoustic images of tyrosinase-expressing MCF-7 tumors both in a tissue mimicking phantom, and in vivo. Images of both the tyrosinase-expressing tumor and a control tumor are presented as both coregistered ultrasound-photoacoustic B-scan images and 3-dimensional photoacoustic volumes created by mechanically scanning the transducer. We find that the tyrosinase-expressing tumor is visible with a signal level 12dB greater than that of the control tumor in vivo. Phantom studies with excised tumors show that the tyrosinase-expressing tumor is visible at depths in excess of 2cm, and have suggested that our imaging system is sensitive to a transfection rate of less than 1%.

  8. Setup for testing cameras for image guided surgery using a controlled NIR fluorescence mimicking light source and tissue phantom

    NASA Astrophysics Data System (ADS)

    Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.

    2017-02-01

    In the development of new near-infrared (NIR) fluorescence dyes for image guided surgery, there is a need for new NIR sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to the present clinical systems that are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom in order to measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate to create a uniform, intensity-controllable light spot (μW-mW) as a target/source for the NIR cameras. Layers of (artificial) tissue with controlled thickness could be placed on the spot to mimic a fluorescent 'cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image guided surgery. The image of the spot obtained with the cameras was captured and analyzed using ImageJ software. Enhanced CCD night vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved to be useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.

  9. A Star Image Extractor for the Nano-JASMINE satellite

    NASA Astrophysics Data System (ADS)

    Yamauchi, M.; Gouda, N.; Kobayashi, Y.; Tsujimoto, T.; Yano, T.; Suganuma, M.; Yamada, Y.; Nakasuka, S.; Sako, N.

    2008-07-01

    We have developed the Star-Image-Extractor (SIE) software, which works as the on-board real-time image processor. It detects and extracts only the object data from the raw image data. SIE has two functions: reducing the image data and providing data for the satellite's high-accuracy attitude control system.
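
    As an illustration of the extraction step described above, the sketch below thresholds a raw frame, labels the connected bright regions, and keeps only a small window around each detected star; the threshold and window size are assumed values, not the Nano-JASMINE flight parameters:

```python
# Sketch of on-board star extraction: threshold the raw frame, label connected
# components, and keep only small windows around each detected star. Threshold and
# window size are assumed values for illustration.
import numpy as np
from scipy import ndimage

def extract_stars(frame, threshold, half_window=4):
    labels, n = ndimage.label(frame > threshold)
    centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
    windows = []
    for cy, cx in centroids:
        y, x = int(round(cy)), int(round(cx))
        y0, y1 = max(0, y - half_window), min(frame.shape[0], y + half_window + 1)
        x0, x1 = max(0, x - half_window), min(frame.shape[1], x + half_window + 1)
        windows.append(((cy, cx), frame[y0:y1, x0:x1].copy()))
    return windows   # only these cut-outs, not the full frame, would be passed on

rng = np.random.default_rng(1)
frame = rng.poisson(5, size=(128, 128)).astype(float)   # background noise
frame[40:43, 70:73] += 200.0                             # a bright "star"
for (cy, cx), win in extract_stars(frame, threshold=50):
    print(f"star near ({cy:.1f}, {cx:.1f}), window shape {win.shape}")
```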

  10. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  11. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  12. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  13. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  14. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  15. Plane development of lateral surfaces for inspection systems

    NASA Astrophysics Data System (ADS)

    Francini, F.; Fontani, D.; Jafrancesco, D.; Mercatelli, L.; Sansoni, P.

    2006-08-01

    The problem of developing the lateral surfaces of a 3D object can arise in item inspection using automated imaging systems. In an industrial environment, these control systems typically work at a high rate and they have to assure reliable inspection of each single item. For compactness requirements it is not convenient to utilise three or four CCD cameras to control all the lateral surfaces of an object. Moreover it is impossible to mount optical components near the object if it is placed on a conveyor belt. The paper presents a system that integrates on a single CCD picture the images of both the frontal surface and the lateral surface of an object. It consists of a freeform lens mounted in front of a CCD camera with a commercial lens. The aim is to have a good magnification of the lateral surface while maintaining a low aberration level, so that the pictures can be exploited in image processing software. The freeform lens, made in plastics, redirects the light coming from the object to the camera lens. The final result is to obtain on the CCD the frontal and lateral surface images, with a selected magnification (even with two different values for the two images), and a gap between these two images, so that an automatic method to analyse the images can be easily applied. A simple method to design the freeform lens is illustrated. The procedure also allows the imaging system to be obtained by modifying a current inspection system, reducing the cost.

  16. Design of UAV high resolution image transmission system

    NASA Astrophysics Data System (ADS)

    Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng

    2017-02-01

    To address the bandwidth limitation of the image transmission system on a UAV, a scheme with image compression technology for mini UAVs is proposed, based on the requirements of a high-definition UAV image transmission system. The H.264 video codec standard coding module and its key technologies were analyzed and studied for UAV video communication. Based on research into high-resolution image encoding and decoding techniques and wireless transmission methods, the high-resolution image transmission system was designed on an architecture combining Android and a video codec chip. The constructed system was verified by laboratory experiments: the bit rate can be controlled easily, the QoS is stable, and the low latency meets most application requirements, not only for military use but also for industrial applications.

  17. A photoacoustic tomography and ultrasound combined system for proximal interphalangeal joint imaging

    NASA Astrophysics Data System (ADS)

    Xu, Guan; Rajian, Justin R.; Girish, Gandikota; Wang, Xueding

    2013-03-01

    A photoacoustic (PA) and ultrasound (US) dual modality system for imaging human peripheral joints is introduced. The system utilizes a commercial US unit for both US control imaging and PA signal acquisition. Preliminary in vivo evaluation of the system on normal volunteers revealed that this system can recover both the structural and functional information of intra- and extra-articular tissues. Presenting both morphological and pathological information in joint, this system holds promise for diagnosis and characterization of inflammatory joint diseases such as rheumatoid arthritis.

  18. Approaches to creating and controlling motion in MRI.

    PubMed

    Fischer, Gregory S; Cole, Gregory; Su, Hao

    2011-01-01

    Magnetic Resonance Imaging (MRI) can provide three dimensional (3D) imaging with excellent resolution and sensitivity, making it ideal for guiding and monitoring interventions. The development of MRI-compatible interventional devices is complicated by factors including: the high magnetic field strength, the requirement that such devices should not degrade image quality, and the confined physical space of the scanner bore. Numerous MRI-guided actuated devices have been developed or are currently being developed utilizing piezoelectric actuators as their primary means of mechanical energy generation to enable better interventional procedure performance. While piezoelectric actuators are highly desirable for MRI-guided actuation for their precision, high holding force, and non-magnetic operation, they are often found to cause image degradation on a large enough scale to render live imaging unusable. This paper describes a newly developed piezoelectric actuator driver and control system designed to drive a variety of both harmonic and non-harmonic motors, which has been demonstrated to operate both types of piezoelectric actuator with less than 5% SNR loss under closed-loop control. The proposed system allows a single controller to control any supported actuator and feedback sensor without any physical hardware changes.

  19. Telemedicine optoelectronic biomedical data processing system

    NASA Astrophysics Data System (ADS)

    Prosolovska, Vita V.

    2010-08-01

    The telemedicine optoelectronic biomedical data processing system is created to share medical information for the control of health rights and for timely, rapid response to crises. The system includes the following main blocks: a bioprocessor, an analog-to-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen for the display of biomedical images. The rated temporal characteristics of the blocks are defined by the particular triggering optoelectronic couple in the analog-to-digital converters and by the imaging time of the matrix screen. The element base for the hardware implementation of the developed matrix screen is integrated optoelectronic couples produced by selective epitaxy.

  20. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing

    PubMed Central

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-01-01

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855

  1. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.

    PubMed

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-02-15

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system.
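
    The published system uses five MATLAB algorithms (image preprocessing, direction location, angle measurement, deflection detection, and auto-focus); the sketch below only illustrates the underlying geometry, converting the pixel offset of the LED target centroid from the image centre into a bore-path deviation in millimetres under an assumed pinhole camera model:

```python
# Simplified deviation estimate from an LED target image. This is not the published
# algorithm set; it only shows the basic geometry, with assumed camera parameters.
import numpy as np

def deviation_mm(image_gray, threshold, pixel_size_mm, focal_length_mm, target_distance_mm):
    """Horizontal/vertical bore-path deviation of the LED target (assumed pinhole model)."""
    ys, xs = np.nonzero(image_gray > threshold)          # bright LED pixels
    if len(xs) == 0:
        raise ValueError("LED target not found in image")
    cx, cy = xs.mean(), ys.mean()                        # LED centroid in pixels
    h, w = image_gray.shape
    dx_px, dy_px = cx - (w - 1) / 2.0, cy - (h - 1) / 2.0
    scale = pixel_size_mm * target_distance_mm / focal_length_mm   # mm in target plane per pixel
    return dx_px * scale, dy_px * scale

img = np.zeros((480, 640))
img[242:246, 330:334] = 255.0                            # synthetic LED spot, slightly off-axis
print(deviation_mm(img, threshold=128, pixel_size_mm=0.006,
                   focal_length_mm=25.0, target_distance_mm=48_000.0))
```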

  2. A semi-automated image analysis procedure for in situ plankton imaging systems.

    PubMed

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory, controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%), and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups.
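
    The skeleton below illustrates the two-level HOG + SVM scheme described above: a first-level classifier assigns each segmented object to a coarse group (arrow-like, copepod-like, gelatinous), and a group-specific classifier then separates targets from non-targets. The training data here are random placeholders; in the actual procedure the classifiers are trained on labelled ZOOVIS regions of interest:

```python
# Skeleton of the two-level classification scheme: coarse grouping followed by a
# group-specific target/non-target decision. HOG parameters and the placeholder
# training data are illustrative only.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

rng = np.random.default_rng(0)
patches = rng.random((60, 64, 64))                    # placeholder segmented objects
X = np.array([hog_features(p) for p in patches])
coarse_y = np.repeat([0, 1, 2], 20)                   # 0 arrow-like, 1 copepod-like, 2 gelatinous
target_y = np.tile([0, 1], 30)                        # 1 = target organism, 0 = non-target

level1 = SVC(kernel="rbf").fit(X, coarse_y)           # first level: coarse group assignment
level2 = {g: SVC(kernel="rbf").fit(X[coarse_y == g], target_y[coarse_y == g])
          for g in np.unique(coarse_y)}               # second level: one SVM per group

new_patch = rng.random((64, 64))
f = hog_features(new_patch).reshape(1, -1)
group = int(level1.predict(f)[0])
print("coarse group:", group, "| target:", bool(level2[group].predict(f)[0]))
```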

  3. A Semi-Automated Image Analysis Procedure for In Situ Plankton Imaging Systems

    PubMed Central

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C.; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M.

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory, controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%), and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups. PMID:26010260

  4. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.
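
    The sketch below illustrates how three filter-phase exposures, captured at three times the output frame rate, are merged into one full-resolution colour frame; each sub-frame is simply treated as a complete colour plane, which is the essential point of the strip-filter modulation scheme:

```python
# Sketch of merging three filter-phase exposures into one colour frame. The real device
# oscillates an RGB strip filter over the sensor with comb drives; here each sub-frame
# is treated as one complete colour plane for illustration.
import numpy as np

def assemble_rgb(sub_frames):
    """sub_frames: sequence of three 2-D exposures taken through R, G and B filter phases."""
    r, g, b = (np.asarray(f, dtype=float) for f in sub_frames)
    return np.stack([r, g, b], axis=-1)      # H x W x 3, full pixel resolution per channel

# Three sub-frames captured at 3x the output frame rate (synthetic data)
rng = np.random.default_rng(0)
subs = [rng.random((480, 640)) for _ in range(3)]
frame = assemble_rgb(subs)
print(frame.shape)   # (480, 640, 3): no loss of spatial resolution to a Bayer mosaic
```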

  5. High data volume and transfer rate techniques used at NASA's image processing facility

    NASA Technical Reports Server (NTRS)

    Heffner, P.; Connell, E.; Mccaleb, F.

    1978-01-01

    Data storage and transfer operations at a new image processing facility are described. The equipment includes high density digital magnetic tape drives and specially designed controllers to provide an interface between the tape drives and computerized image processing systems. The controller performs the functions necessary to convert the continuous serial data stream from the tape drive to a word-parallel blocked data stream which then goes to the computer-based system. With regard to the tape packing density, 1.8 times 10 to the tenth data bits are stored on a reel of one-inch tape. System components and their operation are surveyed, and studies on advanced storage techniques are summarized.

  6. Detection technique of targets for missile defense system

    NASA Astrophysics Data System (ADS)

    Guo, Hua-ling; Deng, Jia-hao; Cai, Ke-rong

    2009-11-01

    Ballistic missile defense system (BMDS) is a weapon system for intercepting enemy ballistic missiles. It includes ballistic-missile warning system, target discrimination system, anti-ballistic-missile guidance systems, and command-control communication system. Infrared imaging detection and laser imaging detection are widely used in BMDS for surveillance, target detection, target tracking, and target discrimination. Based on a comprehensive review of the application of target-detection techniques in the missile defense system, including infrared focal plane arrays (IRFPA), ground-based radar detection technology, 3-dimensional imaging laser radar with a photon counting avalanche photodiode (APD) arrays and microchip laser, this paper focuses on the infrared and laser imaging detection techniques in missile defense system, as well as the trends for their future development.

  7. [A skin cell segregating control system based on PC].

    PubMed

    Liu, Wen-zhong; Zhou, Ming; Zhang, Hong-bing

    2005-11-01

    A PC (personal computer)-based skin cell segregating control system is presented in this paper. Its front-end controller is a single-chip microcomputer that enables the treatment of 6 patients simultaneously, providing great convenience for the clinical treatment of vitiligo. With the use of serial port communication technology, it is possible to monitor and control the front-end controller from a PC terminal, and the application of computer image acquisition technology realizes the synchronous acquisition of pathologic skin cell images before and after the operation together with the case history. Clinical tests prove its conformity with national standards and the pre-set technological requirements.

  8. Design and evaluation of an optical fine-pointing control system for telescopes utilizing a digital star sensor

    NASA Technical Reports Server (NTRS)

    Ostroff, A. J.; Romanczyk, K. C.

    1973-01-01

    One of the most significant problems associated with the development of large orbiting astronomical telescopes is that of maintaining the very precise pointing accuracy required. A proposed solution to this problem utilizes dual-level pointing control. The primary control system maintains the telescope structure attitude stabilized within the field of view to the desired accuracy. In order to demonstrate the feasibility of optically stabilizing the star images to the desired accuracy a regulating system has been designed and evaluated. The control system utilizes a digital star sensor and an optical star image motion compensator, both of which have been developed for this application. These components have been analyzed mathematically, analytical models have been developed, and hardware has been built and tested.

  9. Fabrication of the pinhole aperture for AdaptiSPECT

    PubMed Central

    Kovalsky, Stephen; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical pinhole SPECT imaging system under final construction at the Center for Gamma-Ray Imaging. The system is designed to be able to autonomously change its imaging configuration. The system comprises 16 detectors mounted on translational stages to move radially away and towards the center of the field-of-view. The system also possesses an adaptive pinhole aperture with multiple collimator diameters and pinhole sizes, as well as the possibility to switch between multiplexed and non-multiplexed imaging configurations. In this paper, we describe the fabrication of the AdaptiSPECT pinhole aperture and its controllers. PMID:26146443

  10. Video Imaging System Particularly Suited for Dynamic Gear Inspection

    NASA Technical Reports Server (NTRS)

    Broughton, Howard (Inventor)

    1999-01-01

    A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals so that the moment each tooth of interest reaches a desired location is precisely determined; that location is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or can be stored to provide a history that may later be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.
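
    The trigger timing implied by the abstract is straightforward to compute: one revolution is divided into as many equal intervals as there are teeth, and the strobe and camera are triggered at each interval. The sketch below uses assumed speed and tooth-count values:

```python
# Illustration of the timing scheme: one gear revolution is divided into as many equal
# intervals as there are teeth, and the strobe/camera is triggered when each successive
# tooth reaches the imaging position. Speed and tooth count are assumed values.
def trigger_times(rpm, n_teeth, n_revolutions=1):
    """Return the trigger instants (seconds) for every tooth over n_revolutions."""
    period = 60.0 / rpm                    # time for one full revolution
    dt = period / n_teeth                  # one tooth passes the imaging position every dt
    return [k * dt for k in range(n_teeth * n_revolutions)]

times = trigger_times(rpm=3000, n_teeth=28)
print(f"interval between teeth: {times[1] * 1e3:.3f} ms, "
      f"last trigger of first revolution at {times[-1] * 1e3:.2f} ms")
```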

  11. Magnetic Nanoparticles for Multi-Imaging and Drug Delivery

    PubMed Central

    Lee, Jae-Hyun; Kim, Ji-wook; Cheon, Jinwoo

    2013-01-01

    Various bio-medical applications of magnetic nanoparticles have been explored during the past few decades. As tools that hold great potential for advancing biological sciences, magnetic nanoparticles have been used as platform materials for enhanced magnetic resonance imaging (MRI) agents, biological separation and magnetic drug delivery systems, and magnetic hyperthermia treatment. Furthermore, approaches that integrate various imaging and bioactive moieties have been used in the design of multi-modality systems, which possess synergistically enhanced properties such as better imaging resolution and sensitivity, molecular recognition capabilities, stimulus responsive drug delivery with on-demand control, and spatio-temporally controlled cell signal activation. Below, recent studies that focus on the design and synthesis of multi-mode magnetic nanoparticles will be briefly reviewed and their potential applications in the imaging and therapy areas will be also discussed. PMID:23579479

  12. Edge smoothing for real-time simulation of a polygon face object system as viewed by a moving observer

    NASA Technical Reports Server (NTRS)

    Lotz, Robert W. (Inventor); Westerman, David J. (Inventor)

    1980-01-01

    The visual system within an aircraft flight simulation system receives flight data and terrain data which is formatted into a buffer memory. The image data is forwarded to an image processor which translates the image data into face vertex vectors Vf, defining the position relationship between the vertices of each terrain object and the aircraft. The image processor then rotates, clips, and projects the image data into two-dimensional display vectors (Vd). A display generator receives the Vd faces and other image data to provide analog inputs to CRT devices which provide the window displays for the simulated aircraft. The video signal to the CRT devices passes through an edge smoothing device which prolongs the rise time (and fall time) of the video data inversely as the slope of the edge being smoothed. An operational amplifier within the edge smoothing device has a plurality of independently selectable feedback capacitors each having a different value. The capacitor values form a series that doubles from one to the next, i.e. successive powers of two. Each feedback capacitor has a fast switch responsive to the corresponding bit of a digital binary control word for selecting (1) or not selecting (0) that capacitor. The control word is determined by the slope of each edge. The resulting actual feedback capacitance for each edge is the sum of all the selected capacitors and is directly proportional to the value of the binary control word. The output rise time (or fall time) is a function of the feedback capacitance, and is controlled by the slope through the binary control word.
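
    The relationship between the binary control word and the feedback capacitance can be made concrete with a short calculation: each set bit switches in one capacitor, and because the capacitor values double bit by bit, the total capacitance (and hence the rise/fall time) is directly proportional to the value of the control word. The base capacitance below is an assumed value:

```python
# Worked illustration of the edge-smoothing control word: each bit switches in one
# feedback capacitor, the capacitor values double bit by bit, and the total feedback
# capacitance is proportional to the binary value of the word. C_unit is assumed.
def feedback_capacitance(control_word, n_bits=4, c_unit=10e-12):
    """Sum of the selected feedback capacitors for an n-bit control word."""
    total = 0.0
    for bit in range(n_bits):
        if control_word & (1 << bit):
            total += c_unit * (2 ** bit)   # capacitor values form the series C, 2C, 4C, ...
        # bits that are 0 leave the corresponding capacitor switched out
    return total

for word in range(8):
    print(f"control word {word:03b} -> feedback capacitance {feedback_capacitance(word) * 1e12:5.1f} pF")
```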

  13. Smartphone Cortex Controlled Real-Time Image Processing and Reprocessing for Concentration Independent LED Induced Fluorescence Detection in Capillary Electrophoresis.

    PubMed

    Szarka, Mate; Guttman, Andras

    2017-10-17

    We present the application of a smartphone anatomy based technology in the field of liquid phase bioseparations, particularly in capillary electrophoresis. A simple capillary electrophoresis system was built with LED induced fluorescence detection and a credit card sized minicomputer to prove the concept of real time fluorescent imaging (zone adjustable time-lapse fluorescence image processor) and separation controller. The system was evaluated by analyzing under- and overloaded aminopyrenetrisulfonate (APTS)-labeled oligosaccharide samples. The open source software based image processing tool allowed undistorted signal modulation (reprocessing) if the signal was inappropriate for the actual detection system settings (too low or too high). The novel smart detection tool for fluorescently labeled biomolecules greatly expands dynamic range and enables retrospective correction for injections with unsuitable signal levels without the necessity to repeat the analysis.

  14. Real-Time Imaging System for the OpenPET

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real-time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of the list-mode data. In this study, we developed a system to control the data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated the performance in terms of the real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate and a frame rate of 2 frames per second was achieved with average delay time of 2.1 s.
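
    The count-limiting and intensity compensation step described above amounts to a simple rescaling: when only a fraction of the acquired list-mode events can be reconstructed within the frame budget, the image is multiplied by the ratio of total to used counts so that the displayed intensity still tracks the true count rate. A minimal sketch (the DRAMA reconstruction itself is not reproduced):

```python
# Sketch of the count-ratio intensity compensation; the list-mode DRAMA reconstruction
# and GPU implementation are not reproduced here.
import numpy as np

def compensate(recon_image, used_counts, total_counts):
    """Scale a reconstruction made from a subset of events back to full-count intensity."""
    if used_counts == 0:
        raise ValueError("no events were reconstructed")
    return recon_image * (total_counts / used_counts)

recon = np.full((64, 64), 2.0)         # placeholder image reconstructed from 50,000 events
print(compensate(recon, used_counts=50_000, total_counts=125_000).max())  # 5.0
```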

  15. Development and Application of Stable Phantoms for the Evaluation of Photoacoustic Imaging Instruments

    PubMed Central

    Bohndiek, Sarah E.; Bodapati, Sandhya; Van De Sompel, Dominique; Kothapalli, Sri-Rajasekhar; Gambhir, Sanjiv S.

    2013-01-01

    Photoacoustic imaging combines the high contrast of optical imaging with the spatial resolution and penetration depth of ultrasound. This technique holds tremendous potential for imaging in small animals and importantly, is clinically translatable. At present, there is no accepted standard physical phantom that can be used to provide routine quality control and performance evaluation of photoacoustic imaging instruments. With the growing popularity of the technique and the advent of several commercial small animal imaging systems, it is important to develop a strategy for assessment of such instruments. Here, we developed a protocol for fabrication of physical phantoms for photoacoustic imaging from polyvinyl chloride plastisol (PVCP). Using this material, we designed and constructed a range of phantoms by tuning the optical properties of the background matrix and embedding spherical absorbing targets of the same material at different depths. We created specific designs to enable: routine quality control; the testing of robustness of photoacoustic signals as a function of background; and the evaluation of the maximum imaging depth available. Furthermore, we demonstrated that we could, for the first time, evaluate two small animal photoacoustic imaging systems with distinctly different light delivery, ultrasound imaging geometries and center frequencies, using stable physical phantoms and directly compare the results from both systems. PMID:24086557

  16. Image-based tracking system for vibration measurement of a rotating object using a laser scanning vibrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dongkyu, E-mail: akein@gist.ac.kr; Khalil, Hossam; Jo, Youngjoon

    2016-06-28

    An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. Unlike a conventional system, the proposed one can be used where a position or velocity sensor such as an encoder cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, using a feedback control system, the laser beam tracks the rotating object.
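    The color-based detection of the landmark and laser spot can be illustrated with a simple HSV threshold followed by a centroid computation; the hue windows and thresholds below are assumptions for illustration, not the values used by the authors.

```python
import numpy as np
import cv2

def detect_colored_spot(bgr_image, hue_range, min_sat=80, min_val=80):
    """Return the centroid (x, y) of the largest blob whose HSV color
    falls inside the given hue range, or None if nothing is found."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_range[0], min_sat, min_val], dtype=np.uint8)
    upper = np.array([hue_range[1], 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# hypothetical hue windows: red laser spot vs. green landmark
# laser = detect_colored_spot(frame, hue_range=(0, 10))
# landmark = detect_colored_spot(frame, hue_range=(45, 75))
```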

  17. Development of Dynamic Spatial Video Camera (DSVC) for 4D observation, analysis and modeling of human body locomotion.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hayashibe, Mitsuhiro; Suzuki, Shigeyuki; Otake, Yoshito

    2003-01-01

    We have developed an imaging system for free and quantitative observation of human locomotion in a time-spatial domain by way of real-time imaging. The system is equipped with 60 computer-controlled video cameras to film human locomotion from all angles simultaneously. Images are transferred to the main graphics workstation and organized into a 2D image matrix. The subject can be observed from arbitrary directions by selecting the viewpoint from the optimum image sequence in this image matrix. The system also possesses a function to reconstruct 4D models of the subject's moving body by using the 60 images taken from all directions at one particular time, and it can visualize inner structures such as the skeletal or muscular systems of the subject by compositing computer graphics reconstructed from the MRI data set. We plan to apply this imaging system to clinical observation in the areas of orthopedics, rehabilitation and sports science.

  18. Using the ATL HDI 1000 to collect demodulated RF data for monitoring HIFU lesion formation

    NASA Astrophysics Data System (ADS)

    Anand, Ajay; Kaczkowski, Peter J.; Daigle, Ron E.; Huang, Lingyun; Paun, Marla; Beach, Kirk W.; Crum, Lawrence A.

    2003-05-01

    The ability to accurately track and monitor the progress of lesion formation during HIFU (High Intensity Focused Ultrasound) therapy is important for the success of HIFU-based treatment protocols. To aid in the development of algorithms for accurately targeting and monitoring formation of HIFU-induced lesions, we have developed a software system to perform RF data acquisition during HIFU therapy using a commercially available clinical ultrasound scanner (ATL HDI 1000, Philips Medical Systems, Bothell, WA). The HDI 1000 scanner is built on a software-dominant architecture, permitting straightforward external control of its operation and relatively easy access to quadrature-demodulated RF data. A PC running a custom-developed program sends control signals to the HIFU module via GPIB and to the HDI 1000 via Telnet, alternately interleaving HIFU exposures and RF frame acquisitions. The system was tested in experiments in which HIFU lesions were created in excised animal tissue. No crosstalk between the HIFU beam and the ultrasound imager was detected, thus demonstrating synchronization. Newly developed acquisition modes allow greater user control in setting the image geometry and scanline density, and enable high-frame-rate acquisition. This system facilitates rapid development of signal-processing-based HIFU therapy monitoring algorithms and their implementation in image-guided thermal therapy systems. In addition, the HDI 1000 system can be easily customized for use with other emerging imaging modalities that require access to the RF data, such as elastographic methods and new Doppler-based imaging and tissue characterization techniques.

  19. Integration of LDSE and LTVS logs with HIPAA compliant auditing system (HCAS)

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Liu, Brent J.; Huang, H. K.; Guo, Bing; Documet, Jorge; King, Nelson

    2006-03-01

    The deadline for the HIPAA (Health Insurance Portability and Accountability Act) Security Rules passed in February 2005; being HIPAA compliant has therefore become extremely critical for healthcare providers. HIPAA requires healthcare providers to protect the privacy and integrity of health data and to be able to demonstrate mechanisms that can be used to accomplish this task. It is also required that a healthcare institution be able to provide audit trails on image data access on demand for a specific patient. For these reasons, we have developed a HIPAA compliant auditing system (HCAS) for image data security in a PACS by auditing every image data access. The HCAS was presented at SPIE 2005. This year, two new components, LDSE (Lossless Digital Signature Embedding) and LTVS (Patient Location Tracking and Verification System) logs, have been added to the HCAS. The LDSE can assure medical image integrity in a PACS, while the LTVS can provide access control for a PACS by creating a security zone in the clinical environment. By integrating the LDSE and LTVS logs with the HCAS, the privacy and integrity of image data can be audited as well. Thus, a PACS with the HCAS installed can become HIPAA compliant in image data privacy and integrity, access control, and audit control.

  20. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. A high density of viewing points provides smooth motion parallax and larger image depth without blurring. The basic principle of the stereoscopic display is described first. Then, the design architecture, including hardware and software, is presented. The system consists of a frontal projection lenticular screen, an optimally designed projector array and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on viewing requirements such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one serves as the main controller, while the other five client PCs process 30 channel signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, and distortion correction. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.

  1. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have several advantages over traditional CCD sensors, and imaging systems based on CMOS have become a focus of research and development. In order to achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SL75T, and a Camera Link interface and an AM41V4 CMOS image sensor are used to transmit and acquire image data. The AM41V4 is a 4-megapixel, 500-frame-per-second CMOS image sensor with a global shutter and a 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The Camera Link interface uses the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. Light reflected from objects is captured by the CMOS detector, which converts it into electronic signals and sends them to the FPGA. The FPGA processes the received data and transmits it through the Camera Link interface, configured in full mode, to a host computer equipped with acquisition cards, where the images are stored, visualized and further processed. The structure and operating principle of the system are explained, and the hardware and software design is introduced. The FPGA provides the drive clock for the CMOS sensor; the data from the CMOS sensor are converted to LVDS signals and then transmitted to the data acquisition cards. Simulation results include the row-transfer timing sequence of the CMOS sensor. The system achieves real-time image acquisition and external control.

  2. Hybrid architecture active wavefront sensing and control system, and method

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee D. (Inventor); Dean, Bruce H. (Inventor); Hyde, Tristram T. (Inventor)

    2011-01-01

    According to various embodiments, provided herein is an optical system and method that can be configured to perform image analysis. The optical system can comprise a telescope assembly and one or more hybrid instruments. The one or more hybrid instruments can be configured to receive image data from the telescope assembly and perform a fine guidance operation and a wavefront sensing operation, simultaneously, on the image data received from the telescope assembly.

  3. Computer system for scanning tunneling microscope automation

    NASA Astrophysics Data System (ADS)

    Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.

    1987-03-01

    A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC), either an XT or an AT, which performs the control, data acquisition and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements, as well as the inclusion of user routines for data analysis.

  4. Automated hybridization/imaging device for fluorescent multiplex DNA sequencing

    DOEpatents

    Weiss, R.B.; Kimball, A.W.; Gesteland, R.F.; Ferguson, F.M.; Dunn, D.M.; Di Sera, L.J.; Cherry, J.L.

    1995-11-28

    A method is disclosed for automated multiplex sequencing of DNA with an integrated automated imaging hybridization chamber system. This system comprises a hybridization chamber device for mounting a membrane containing size-fractionated multiplex sequencing reaction products, apparatus for fluid delivery to the chamber device, imaging apparatus for light delivery to the membrane and image recording of fluorescence emanating from the membrane while in the chamber device, and programmable controller apparatus for controlling operation of the system. The multiplex reaction products are hybridized with a probe, the enzyme (such as alkaline phosphatase) is bound to a binding moiety on the probe, and a fluorogenic substrate (such as a benzothiazole derivative) is introduced into the chamber device by the fluid delivery apparatus. The enzyme converts the fluorogenic substrate into a fluorescent product which, when illuminated in the chamber device with a beam of light from the imaging apparatus, excites fluorescence of the fluorescent product to produce a pattern of hybridization. The pattern of hybridization is imaged by a CCD camera component of the imaging apparatus to obtain a series of digital signals. These signals are converted by the controller apparatus, acting as an automated sequence reader, into a string of nucleotides corresponding to the nucleotide sequence. The method and apparatus are also applicable to other membrane-based applications such as colony and plaque hybridization and Southern, Northern, and Western blots. 9 figs.

  5. Automated hybridization/imaging device for fluorescent multiplex DNA sequencing

    DOEpatents

    Weiss, Robert B.; Kimball, Alvin W.; Gesteland, Raymond F.; Ferguson, F. Mark; Dunn, Diane M.; Di Sera, Leonard J.; Cherry, Joshua L.

    1995-01-01

    A method is disclosed for automated multiplex sequencing of DNA with an integrated automated imaging hybridization chamber system. This system comprises a hybridization chamber device for mounting a membrane containing size-fractionated multiplex sequencing reaction products, apparatus for fluid delivery to the chamber device, imaging apparatus for light delivery to the membrane and image recording of fluorescence emanating from the membrane while in the chamber device, and programmable controller apparatus for controlling operation of the system. The multiplex reaction products are hybridized with a probe, then an enzyme (such as alkaline phosphatase) is bound to a binding moiety on the probe, and a fluorogenic substrate (such as a benzothiazole derivative) is introduced into the chamber device by the fluid delivery apparatus. The enzyme converts the fluorogenic substrate into a fluorescent product which, when illuminated in the chamber device with a beam of light from the imaging apparatus, excites fluorescence of the fluorescent product to produce a pattern of hybridization. The pattern of hybridization is imaged by a CCD camera component of the imaging apparatus to obtain a series of digital signals. These signals are converted by the controller apparatus, acting as an automated sequence reader, into a string of nucleotides corresponding to the nucleotide sequence. The method and apparatus are also applicable to other membrane-based applications such as colony and plaque hybridization and Southern, Northern, and Western blots.

  6. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.

  7. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for this aerial camera, since the camera always takes high-resolution photographs during high-speed motion. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. Automatic gamma correction is applied before the image is output, so that the image is better suited to viewing and analysis by human observers. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability and high reliability in severe and complex environments.
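    The exposure/gain update described above can be sketched as a simple rule that adjusts the electronic shutter first and falls back to gain, with a speed-dependent shutter ceiling to limit motion blur; all thresholds, gains and limits below are assumed values, not those of the actual camera.

```python
def update_exposure(mean_brightness, shutter_us, gain_db,
                    ground_speed_mps, target=128, tol=10,
                    shutter_limits=(20, 2000), gain_limits=(0.0, 24.0)):
    """One AEC/AGC step: keep image brightness near the target while
    capping shutter time according to platform speed (illustrative)."""
    # assumed rule: cap exposure time so motion blur stays small at high speed
    max_shutter = min(shutter_limits[1], 1e6 / max(ground_speed_mps, 1e-3) * 0.001)
    error = target - mean_brightness
    if abs(error) <= tol:
        return shutter_us, gain_db
    if error > 0:                       # image too dark
        if shutter_us < max_shutter:
            shutter_us = min(max_shutter, shutter_us * 1.2)
        else:
            gain_db = min(gain_limits[1], gain_db + 1.0)
    else:                               # image too bright
        if gain_db > gain_limits[0]:
            gain_db = max(gain_limits[0], gain_db - 1.0)
        else:
            shutter_us = max(shutter_limits[0], shutter_us / 1.2)
    return shutter_us, gain_db

print(update_exposure(mean_brightness=70, shutter_us=200, gain_db=0.0, ground_speed_mps=60))
```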

  8. A linear shift-invariant image preprocessing technique for multispectral scanner systems

    NASA Technical Reports Server (NTRS)

    Mcgillem, C. D.; Riemer, T. E.

    1973-01-01

    A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.

  9. Telerobotic Surgery: An Intelligent Systems Approach to Mitigate the Adverse Effects of Communication Delay. Chapter 4

    NASA Technical Reports Server (NTRS)

    Cardullo, Frank M.; Lewis, Harold W., III; Panfilov, Peter B.

    2007-01-01

    An extremely innovative approach has been presented, which is to have the surgeon operate through a simulator running in real-time enhanced with an intelligent controller component to enhance the safety and efficiency of a remotely conducted operation. The use of a simulator enables the surgeon to operate in a virtual environment free from the impediments of telecommunication delay. The simulator functions as a predictor and periodically the simulator state is corrected with truth data. Three major research areas must be explored in order to ensure achieving the objectives. They are: simulator as predictor, image processing, and intelligent control. Each is equally necessary for success of the project and each of these involves a significant intelligent component in it. These are diverse, interdisciplinary areas of investigation, thereby requiring a highly coordinated effort by all the members of our team, to ensure an integrated system. The following is a brief discussion of those areas. Simulator as a predictor: The delays encountered in remote robotic surgery will be greater than any encountered in human-machine systems analysis, with the possible exception of remote operations in space. Therefore, novel compensation techniques will be developed. Included will be the development of the real-time simulator, which is at the heart of our approach. The simulator will present real-time, stereoscopic images and artificial haptic stimuli to the surgeon. Image processing: Because of the delay and the possibility of insufficient bandwidth a high level of novel image processing is necessary. This image processing will include several innovative aspects, including image interpretation, video to graphical conversion, texture extraction, geometric processing, image compression and image generation at the surgeon station. Intelligent control: Since the approach we propose is in a sense predictor based, albeit a very sophisticated predictor, a controller, which not only optimizes end effector trajectory but also avoids error, is essential. We propose to investigate two different approaches to the controller design. One approach employs an optimal controller based on modern control theory; the other one involves soft computing techniques, i.e. fuzzy logic, neural networks, genetic algorithms and hybrids of these.

  10. Impedance-controlled ultrasound probe

    NASA Astrophysics Data System (ADS)

    Gilbertson, Matthew W.; Anthony, Brian W.

    2011-03-01

    An actuated hand-held impedance-controlled ultrasound probe has been developed. The controller maintains a prescribed contact state (force and velocity) between the probe and a patient's body. The device will enhance the diagnostic capability of free-hand elastography and swept-force compound imaging, and also make it easier for a technician to acquire repeatable (i.e. directly comparable) images over time. The mechanical system consists of an ultrasound probe, ball-screw-driven linear actuator, and a force/torque sensor. The feedback controller commands the motor to rotate the ball-screw to translate the ultrasound probe in order to maintain a desired contact force. It was found that users of the device, with the control system engaged, maintain a constant contact force with 15 times less variation than without the controller engaged. The system was used to determine the elastic properties of soft tissue.
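    The force-regulation loop described above can be sketched as a proportional-integral law that converts the contact-force error into a probe velocity command; the gains, force setpoint and velocity limit below are assumed values, not those of the actual device.

```python
class ContactForceController:
    """PI controller: converts contact-force error into a probe velocity
    command for the ball-screw actuator (illustrative, assumed gains)."""
    def __init__(self, desired_force_n=5.0, kp=0.004, ki=0.002, v_max=0.02):
        self.desired = desired_force_n
        self.kp, self.ki, self.v_max = kp, ki, v_max
        self.integral = 0.0

    def update(self, measured_force_n, dt):
        error = self.desired - measured_force_n      # positive -> push probe further in
        self.integral += error * dt
        v = self.kp * error + self.ki * self.integral
        return max(-self.v_max, min(self.v_max, v))  # clamp commanded velocity (m/s)

ctrl = ContactForceController()
print(ctrl.update(measured_force_n=3.2, dt=0.001))   # probe advances slowly toward the target force
```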

  11. Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.

    PubMed

    Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam

    2016-01-01

    An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of Saturation and Value components of HSV color space was incorporated that was enhanced further using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high quality images and about 80% for those suffering from poor lighting and/or blood, water and smoke noises. A reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200ms. It was concluded that with further developments, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
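    The compound Saturation/Value measure used for marker-free tip detection can be sketched as a per-pixel score refined by Hue; the weights, thresholds and hue window below are assumptions for illustration only, not the published parameters.

```python
import numpy as np
import cv2

def instrument_mask(bgr_frame, w_sat=0.6, w_val=0.4, thresh=0.45):
    """Score pixels by a weighted combination of low saturation and high
    value (metallic instruments are bright and grey), then use Hue to
    suppress strongly red tissue pixels; returns a binary mask.
    Weights and thresholds are illustrative assumptions."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0].astype(np.float32)            # OpenCV hue range: 0..179
    s = hsv[..., 1].astype(np.float32) / 255.0
    v = hsv[..., 2].astype(np.float32) / 255.0
    score = w_sat * (1.0 - s) + w_val * v
    reddish = (h < 10) | (h > 170)                # hue wraps around at red
    mask = (score > thresh) & ~reddish
    return mask.astype(np.uint8) * 255

# a tip estimate could then trace the mask from the image border inward,
# using the boundary-crossing cue mentioned in the abstract
```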

  12. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing in understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  13. Note: Design and implementation of a home-built imaging system with low jitter for cold atom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hachtel, A. J.; Gillette, M. C.; Clements, E. R.

    A novel home-built system for imaging cold atom samples is presented using a readily available astronomy camera which has the requisite sensitivity but no timing-control. We integrate the camera with LabVIEW achieving fast, low-jitter imaging with a convenient user-defined interface. We show that our system takes precisely timed millisecond exposures and offers significant improvements in terms of system jitter and readout time over previously reported home-built systems. Our system rivals current commercial “black box” systems in performance and user-friendliness.

  14. Design and analysis of x-ray vision systems for high-speed detection of foreign body contamination in food

    NASA Astrophysics Data System (ADS)

    Graves, Mark; Smith, Alexander; Batchelor, Bruce G.; Palmer, Stephen C.

    1994-10-01

    In the food industry there is an ever increasing need to control and monitor food quality. In recent years fully automated x-ray inspection systems have been used to inspect food on-line for foreign body contamination. These systems involve a complex integration of x-ray imaging components with state-of-the-art high-speed image processing. The quality of the x-ray image obtained by such systems is very poor compared with images obtained from other inspection processes, which makes reliable detection of very small, low-contrast defects extremely difficult. It is therefore extremely important to optimize the x-ray imaging components to give the best possible image. In this paper we present a method of analyzing the x-ray imaging system in order to consider the contrast obtained when viewing small defects.

  15. Cardiac Imaging System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Although not available to all patients with narrowed arteries, balloon angioplasty has expanded dramatically since its introduction with an estimated further growth to 562,000 procedures in the U.S. alone by 1992. Growth has fueled demand for higher quality imaging systems that allow the cardiologist to be more accurate and increase the chances of a successful procedure. A major advance is the Digital Cardiac Imaging (DCI) System designed by Philips Medical Systems International, Best, The Netherlands and marketed in the U.S. by Philips Medical Systems North America Company. The key benefit is significantly improved real-time imaging and the ability to employ image enhancement techniques to bring out added details. Using a cordless control unit, the cardiologist can manipulate images to make immediate assessment, compare live x-ray and roadmap images by placing them side-by-side on monitor screens, or compare pre-procedure and post procedure conditions. The Philips DCI improves the cardiologist's precision by expanding the information available to him.

  16. Radionuclide bone imaging in the evaluation of osseous allograft systems. Scientific report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, J.F.; Cagle, J.D.; Stevenson, J.S.

    1975-02-01

    Evaluation of the progress of osteogenic activity in mandibular bone grafts in dogs by a noninvasive, nondestructive radionuclide method is feasible. The method provides a meaningful sequential interpretation of osseous repair more sensitive than conventional radiography. It is presumed that accumulating hydroxyapatite is being labelled by the imaging agent technetium diphosphonate. The osseous allograft systems studied were comparable to or exceeded autografts in their repair activity in mandibular discontinuity defects as judged by radionuclide imaging. A lyophilized mandibular allograft segment augmented with autologous cancellous marrow was more active than autograft controls at 3 and 6 weeks and was the most active system studied. Allograft segments augmented with lyophilized crushed cortical allogeneic bone particles were equal to controls at 3 weeks and more active than controls at 6 weeks. Lyophilized crushed cortical allogeneic bone particles retained in a Millipore filter, while not clinically stable at 6 weeks, did show osteogenic activity equal to control autografts at this interval. (GRA)

  17. A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    1996-01-01

    NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high-speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere and other vibrations. The tracker consists of two elements: an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. The ground-based application of reducing image motion due to atmospheric fluctuations requires error determination at a rate of at least 100 Hz. It would be desirable to have an error determination rate of 1 kHz to assure that higher-rate image motion is reduced and to increase the control system stability. Two algorithms that are typically used for tracking are presented. These algorithms are examined for their applicability to tracking sunspots, specifically their accuracy when only one column and one row of CCD pixels are used. To examine the accuracy of this approach, two techniques are used. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second technique involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
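    The single-row/single-column idea can be illustrated by estimating the shift between 1-D intensity profiles from successive frames with a cross-correlation; this sketch uses a simple integer-pixel argmax and is only an illustration, not one of the two algorithms evaluated in the report.

```python
import numpy as np

def profile_shift(reference, current):
    """Estimate the 1-D shift (in pixels) of the current profile relative
    to the reference via cross-correlation (illustrative sketch)."""
    ref = reference - reference.mean()
    cur = current - current.mean()
    corr = np.correlate(cur, ref, mode="full")
    return corr.argmax() - (len(ref) - 1)

# toy example: a dark "sunspot" profile shifted by about 3 pixels
x = np.arange(256)
ref_row = 1000.0 - 300.0 * np.exp(-((x - 120) ** 2) / 50.0)
cur_row = 1000.0 - 300.0 * np.exp(-((x - 123) ** 2) / 50.0)
print(profile_shift(ref_row, cur_row))   # expected shift of about 3 pixels

# the same estimate applied to one CCD column gives the vertical error signal
```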

  18. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
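    The centering behaviour of the camera control process can be sketched as a proportional law on the skater centroid and region size; the gains, sign conventions and zoom rule below are illustrative assumptions, not the system's actual PTZ commands.

```python
import numpy as np

def ptz_command(skater_mask, image_shape, k_pan=0.05, k_tilt=0.05,
                target_fraction=0.15, k_zoom=0.5):
    """Compute pan/tilt/zoom adjustments that drive the skater region
    toward the image center and a target size (illustrative sketch)."""
    ys, xs = np.nonzero(skater_mask)
    if xs.size == 0:
        return 0.0, 0.0, 0.0                      # no skater found: hold position
    h, w = image_shape
    cx, cy = xs.mean(), ys.mean()
    pan = k_pan * (cx - w / 2) / (w / 2)          # positive -> pan right
    tilt = k_tilt * (cy - h / 2) / (h / 2)        # positive -> tilt down
    area_fraction = xs.size / float(h * w)
    zoom = k_zoom * (target_fraction - area_fraction)   # zoom in if skater too small
    return pan, tilt, zoom
```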

  19. Applying face identification to detecting hijacking of airplane

    NASA Astrophysics Data System (ADS)

    Luo, Xuanwen; Cheng, Qiang

    2004-09-01

    The hijacking of airplanes and their crashing into the World Trade Center by terrorists was a disaster for civilization. Preventing hijackings is critical to homeland security. Reporting a hijacking in time, preventing the terrorist from operating the plane, and landing the plane at the nearest airport could be an effective way to avert such a tragedy. Image processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are input into a face identification system installed in the airplane. A camera in front of the pilot's seat continuously captures the pilot's face during the flight and compares it with the pre-input pilot face images. If a different face is detected, a warning signal is sent to the ground automatically. At the same time, the automatic cruise system is started or the plane is controlled from the ground. The terrorists will have no control over the plane. The plane can then be landed at the nearest or most appropriate airport under the control of the ground or the cruise system. This technique could also be used in the automobile industry as an image key to prevent car theft.

  20. Adaptive optical filter

    DOEpatents

    Whittemore, Stephen Richard

    2013-09-10

    Imaging systems include a detector and a spatial light modulator (SLM) that is coupled so as to control image intensity at the detector based on predetermined detector limits. By iteratively adjusting SLM element values, image intensity at one or all detector elements or portions of an imaging detector can be controlled to be within limits. The SLM can be secured to the detector at a spacing such that the SLM is effectively at an image focal plane. In some applications, the SLM can be adjusted to impart visible or hidden watermarks to images or to reduce image intensity at one or a selected set of detector elements so as to reduce detector blooming.
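    The iterative SLM adjustment can be sketched as a per-element feedback loop that attenuates whichever detector elements exceed a limit; the update rule, limit and step size are assumptions for illustration, not the patented method's exact procedure.

```python
import numpy as np

def adjust_slm(slm, detector_image, limit=0.9, step=0.05):
    """One iteration: lower the transmission of SLM elements whose paired
    detector pixels exceed the limit, and relax the others (illustrative)."""
    over = detector_image > limit
    slm = np.where(over, slm * (1.0 - step), np.minimum(1.0, slm * (1.0 + step)))
    return np.clip(slm, 0.0, 1.0)

# toy loop: the "scene" saturates in one corner until the SLM pulls it down
scene = np.full((8, 8), 0.5)
scene[:3, :3] = 1.5
slm = np.ones_like(scene)
for _ in range(60):
    detector = np.clip(scene * slm, 0.0, 1.0)     # detector saturates at 1.0
    slm = adjust_slm(slm, detector, limit=0.9)
print(detector.max() <= 0.95)                     # intensity held near the limit
```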

  1. Wavefront control system for the Keck telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brase, J. M., LLNL

    1998-03-01

    The laser guide star adaptive optics system currently being developed for the Keck 2 telescope consists of several major subsystems: the optical bench, wavefront control, user interface and supervisory control, and the laser system. The paper describes the design and implementation of the wavefront control subsystem that controls a 349 actuator deformable mirror for high order correction and tip-tilt mirrors for stabilizing the image and laser positions.

  2. Pixel Perfect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.

    2005-09-01

    Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image. Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
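    The registration step amounts to evaluating a cubic (third-order polynomial) warp per pixel and resampling with bilinear interpolation; the software sketch below shows the same idea without the FPGA forward-differencing pipeline, with an assumed coefficient layout.

```python
import numpy as np

def cubic_warp_coords(shape, coeffs_x, coeffs_y):
    """Evaluate a cubic warp (x', y') = P(x, y) over a pixel grid.
    coeffs_* are 10 polynomial coefficients for the terms
    1, x, y, x^2, xy, y^2, x^3, x^2 y, x y^2, y^3 (assumed layout)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    terms = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y,
                      x ** 3, x * x * y, x * y * y, y ** 3])
    xp = np.tensordot(coeffs_x, terms, axes=1)
    yp = np.tensordot(coeffs_y, terms, axes=1)
    return xp, yp

def bilinear_sample(img, xp, yp):
    """Sample img at non-integer coordinates with bilinear interpolation."""
    h, w = img.shape
    x0 = np.clip(np.floor(xp).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(yp).astype(int), 0, h - 2)
    fx, fy = np.clip(xp - x0, 0, 1), np.clip(yp - y0, 0, 1)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bottom = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

# toy check: an identity warp plus a small constant shift
coeffs_x = np.array([0.25, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)
coeffs_y = np.array([0.25, 0, 1, 0, 0, 0, 0, 0, 0, 0], dtype=float)
img = np.random.default_rng(1).random((64, 64))
xp, yp = cubic_warp_coords(img.shape, coeffs_x, coeffs_y)
registered = bilinear_sample(img, xp, yp)
```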

  3. [The dilemma of data flood - reducing costs and increasing quality control].

    PubMed

    Gassmann, B

    2012-09-05

    Digitization is found everywhere in sonography. Printing ultrasound images with a video printer on special paper is now done only in isolated cases. The documentation of sonographic procedures is increasingly performed by saving image sequences instead of still frames. Echocardiography is now routinely recorded with so-called R-R loops. In contrast-enhanced ultrasound, recording sequences is necessary to gain a thorough impression of the vascular structure of interest. To work with this flood of data in daily practice, specialized software is required. Comparing stored and recent images/sequences at follow-up is very helpful. Nevertheless, quality control of the ultrasound system and the transducers remains simple and safe: using a phantom for detail resolution and general image quality, the stored images/sequences are comparable over the life cycle of the system. Comparison at follow-up immediately reveals decreased image quality and transducer defects.

  4. Spread spectrum image steganography.

    PubMed

    Marvel, L M; Boncelet, C R; Retter, C T

    1999-01-01

    In this paper, we present a new method of digital steganography, entitled spread spectrum image steganography (SSIS). Steganography, which means "covered writing" in Greek, is the science of communicating in a hidden manner. Following a discussion of steganographic communication theory and review of existing techniques, the new method, SSIS, is introduced. This system hides and recovers a message of substantial length within digital imagery while maintaining the original image size and dynamic range. The hidden message can be recovered using appropriate keys without any knowledge of the original image. Image restoration, error-control coding, and techniques similar to spread spectrum are described, and the performance of the system is illustrated. A message embedded by this method can be in the form of text, imagery, or any other digital signal. Applications for such a data-hiding scheme include in-band captioning, covert communication, image tamperproofing, authentication, embedded control, and revision tracking.

  5. Noncontact optical motion sensing for real-time analysis

    NASA Astrophysics Data System (ADS)

    Fetzer, Bradley R.; Imai, Hiromichi

    1990-08-01

    The adaptation of an image dissector tube (IDT) within the OPTFOLLOW system provides high resolution displacement measurement of a light discontinuity. Due to the high speed response of the IDT and the advanced servo loop circuitry, the system is capable of real time analysis of the object under test. The image of the discontinuity may be contoured by direct or reflected light and ranges spectrally within the field of visible light. The image is monitored to 500 kHz through a lens configuration which transposes the optical image upon the photocathode of the IDT. The photoelectric effect accelerates the resultant electrons through a photomultiplier and an enhanced current is emitted from the anode. A servo loop controls the electron beam, continually centering it within the IDT using magnetic focusing of deflection coils. The output analog voltage from the servo amplifier is thereby proportional to the displacement of the target. The system is controlled by a microprocessor with a 32kbyte memory and provides a digital display as well as instructional readout on a color monitor allowing for offset image tracking and automatic system calibration.

  6. Compact Microscope Imaging System With Intelligent Controls Improved

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The Compact Microscope Imaging System (CMIS) with intelligent controls is a diagnostic microscope analysis tool with intelligent controls for use in space, industrial, medical, and security applications. This compact miniature microscope, which can perform tasks usually reserved for conventional microscopes, has unique advantages in the fields of microscopy, biomedical research, inline process inspection, and space science. Its unique approach integrates a machine vision technique with an instrumentation and control technique that provides intelligence via the use of adaptive neural networks. The CMIS system was developed at the NASA Glenn Research Center specifically for interface detection used for colloid hard spheres experiments; biological cell detection for patch clamping, cell movement, and tracking; and detection of anode and cathode defects for laboratory samples using microscope technology.

  7. Magnetic quadrupoles lens for hot spot proton imaging in inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Teng, J.; Gu, Y. Q.; Chen, J.; Zhu, B.; Zhang, B.; Zhang, T. K.; Tan, F.; Hong, W.; Zhang, B. H.; Wang, X. Q.

    2016-08-01

    Imaging of DD-produced protons from an implosion hot spot region by a miniature permanent magnetic quadrupole (PMQ) lens is proposed. The corresponding object-image relation is deduced and an adjustment method for this imaging system is discussed. Ideal point-to-point imaging demands a monoenergetic proton source; nevertheless, we show that the image blur induced by the proton energy spread is a second-order effect and is therefore controllable. A proton imaging system based on a miniature PMQ lens is designed for 2.8 MeV DD-protons, and an adjustment method for the case of a proton energy shift is proposed. The spatial resolution of this system is better than 10 μm when the proton yield is above 10^9 and the spectral width is within 10%.

  8. Computer-aided system for detecting runway incursions

    NASA Astrophysics Data System (ADS)

    Sridhar, Banavar; Chatterji, Gano B.

    1994-07-01

    A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.

  9. Design of CMOS imaging system based on FPGA

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Chen, Xiaolai

    2017-10-01

    In order to meet the needs of engineering applications for a high-dynamic-range CMOS camera operating in rolling shutter mode, a complete imaging system is designed based on the CMOS imaging sensor NSC1105. The paper adopts a CMOS+ADC+FPGA+Camera Link processing architecture and introduces the design and implementation of the hardware system. The camera software system, which consists of a CMOS timing drive module, an image acquisition module and a transmission control module, is designed in Verilog and runs on a Xilinx FPGA. The ISE 14.6 simulator ISim is used for signal simulation. Imaging experiments show that the system provides 1280*1024 pixel resolution, a frame rate of 25 fps and a dynamic range of more than 120 dB. The imaging quality of the system satisfies the specified requirements.

  10. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel, rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operational modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of the reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. We also discuss the potential use of the single-pixel imaging system for quantum applications.
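    A minimal simulation of the single-pixel measurement and reconstruction principle: random binary patterns are projected, one intensity value is recorded per pattern, and the image is recovered by correlating the measurements with the patterns. This is the basic correlation (ghost-imaging-style) estimate under assumed parameters, not one of the specific algorithms compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 32                                   # reconstructed image is n x n
scene = np.zeros((n, n))
scene[8:24, 12:20] = 1.0                 # simple synthetic object

m = 4000                                 # number of projected patterns
patterns = rng.integers(0, 2, size=(m, n, n)).astype(float)

# single-pixel detector: one total-intensity value per pattern
measurements = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

# correlation reconstruction: average of (S - <S>) * P over the pattern ensemble
recon = np.tensordot(measurements - measurements.mean(), patterns, axes=1) / m
recon -= recon.min()
recon /= recon.max()
print(float(np.corrcoef(recon.ravel(), scene.ravel())[0, 1]))  # clearly positive correlation
```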

  11. [Design of the image browser for PACS image workstation].

    PubMed

    Li, Feng; Zhou, He-Qin

    2006-09-01

    The design of a PACS image workstation based on DICOM 3.0 is introduced in this paper, and the design method of the PACS image browser based on control system theory is presented, focusing on two main units: the DICOM analyzer and the information mapping transformer.

  12. 36 CFR § 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  13. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  14. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  15. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  16. 36 CFR 1194.21 - Software applications and operating systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...

  17. No scanning depth imaging system based on TOF

    NASA Astrophysics Data System (ADS)

    Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo

    2016-03-01

    To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, traditional measurement methods usually adopt point-by-point or line-by-line scanning, which is slow and inefficient. In this paper, a scanning-free depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a dedicated infrared image sensor module, an image data processor and controller, a data cache circuit, a communication circuit, and so on. According to the working principle of TOF measurement, an image sequence is collected by the high-speed CMOS sensor, the distance information is obtained by identifying the phase difference, and the amplitude image is also calculated. Experiments were conducted, and the results show that the system achieves scanning-free depth imaging with good performance.
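    The phase-difference ranging step can be illustrated with the standard four-phase (four-bucket) demodulation used by continuous-wave TOF sensors; in this minimal sketch the 20 MHz modulation frequency and the toy sample model are assumptions for illustration, not values taken from the described system.

```python
import numpy as np

C = 299_792_458.0           # speed of light, m/s

def tof_depth(q0, q90, q180, q270, f_mod=20e6):
    """Per-pixel depth and amplitude from four phase-stepped samples of a
    continuous-wave TOF sensor (standard four-bucket demodulation)."""
    phase = np.arctan2(q90 - q270, q0 - q180)      # radians
    phase = np.mod(phase, 2 * np.pi)               # wrap to [0, 2*pi)
    depth = C * phase / (4 * np.pi * f_mod)        # round-trip phase -> one-way distance
    amplitude = 0.5 * np.hypot(q90 - q270, q0 - q180)
    return depth, amplitude

# toy check: a target at 3 m produces phase = 4*pi*f*d/c
d_true = 3.0
phi = 4 * np.pi * 20e6 * d_true / C
samples = [np.cos(phi - k * np.pi / 2) for k in range(4)]   # q0, q90, q180, q270
depth, _ = tof_depth(*samples)
print(round(float(depth), 3))    # approximately 3.0
```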

  18. Intelligent image capture of cartridge cases for firearms examiners

    NASA Astrophysics Data System (ADS)

    Jones, Brett C.; Guerci, Joseph R.

    1997-02-01

    The FBI's DRUGFIRE™ system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of human examiners' visual perception and the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test using a custom-designed neural network.

  19. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summary: Program title: DynWFC (Dynamic WaveFront Coding). Catalogue identifier: AEKC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 483. No. of bytes in distributed program, including test data, etc.: 2 437 713. Distribution format: tar.gz. Programming language: Labview 8.5 and NI Vision and MinGW C Compiler. Computer: Tested on PC with Intel® Pentium®. Operating system: Tested on Windows XP. Classification: 18. Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.

  20. Neural imaging in songbirds using fiber optic fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Nooshabadi, Fatemeh; Hearn, Gentry; Lints, Thierry; Maitland, Kristen C.

    2012-02-01

    The song control system of juvenile songbirds is an important model for studying the developmental acquisition and generation of complex learned vocal motor sequences, two processes that are fundamental to human speech and language. To understand the neural mechanisms underlying song production, it is critical to characterize the activity of identified neurons in the song control system when the bird is singing. Neural imaging in unrestrained singing birds, although technically challenging, will advance our understanding of neural ensemble coding mechanisms in this system. We are exploring the use of a fiber optic microscope for functional imaging in the brain of behaving and singing birds in order to better understand the contribution of a key brain nucleus (high vocal center nucleus; HVC) to temporal aspects of song motor control. We have constructed a fluorescence microscope with LED illumination, a fiber bundle for transmission of fluorescence excitation and emission light, a ~2x GRIN lens, and a CCD for image acquisition. The system has 2 μm resolution, 375 μm field of view, 200 μm working distance, and 1 mm outer diameter. As an initial characterization of this setup, neurons in HVC were imaged using the fiber optic microscope after injection of quantum dots or fluorescent retrograde tracers into different song nuclei. A Lucid Vivascope confocal microscope was used to confirm the imaging results. Long-term imaging of the activity of these neurons in juvenile birds during singing may lead us to a better understanding of the central motor codes for song and the central mechanism by which auditory experience modifies song motor commands to enable vocal learning and imitation.

  1. A Mobile System for Measuring Water Surface Velocities Using Unmanned Aerial Vehicle and Large-Scale Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Chen, Y. L.

    2015-12-01

    Measurement technologies for river flow velocity are divided into intrusive and nonintrusive methods. Intrusive methods require in-field operations; the measuring process is time consuming and likely to cause injury to operators and damage to instruments. Nonintrusive methods require fewer operators and reduce instrument damage because nothing is attached directly to the flow. Nonintrusive measurements may use radar or image velocimetry to measure the velocities at the surface of the water flow. Image velocimetry, such as large-scale particle image velocimetry (LSPIV), provides not only point velocities but the flow velocities over an area simultaneously. Flow properties over an area hold the promise of providing spatial information about flow fields. This study constructs a mobile UAV-LSPIV system that combines an unmanned aerial vehicle (UAV) with LSPIV to measure flows in the field. The mobile system consists of a six-rotor UAV helicopter, a Sony NEX-5T camera, a gimbal, an image transfer device, a ground station and a remote control device. The active gimbal helps keep the camera lens orthogonal to the water surface and reduces image distortion. The image transfer device allows the captured images to be monitored instantly. The operator controls the UAV with the remote control device through the ground station and can access flight data such as flying height and the GPS coordinates of the UAV. The mobile system was then applied in field experiments. The deviation between velocities measured by UAV-LSPIV in the field experiments and by a handheld Acoustic Doppler Velocimeter (ADV) is under 8%. The results suggest that UAV-LSPIV can be effectively applied to surface flow studies.
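    The LSPIV velocity estimate itself comes from a windowed cross-correlation between consecutive frames; the following is a minimal FFT-based sketch for a single interrogation window, in which the window size, frame interval and ground sampling distance are assumed values.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Pixel displacement of window b relative to window a via FFT
    cross-correlation (integer-pixel accuracy, illustrative)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    shift = [(p if p <= s // 2 else p - s) for p, s in zip(peak, corr.shape)]
    return shift[1], shift[0]                      # (dx, dy) in pixels

# toy surface texture advected 4 px right and 1 px down between frames
rng = np.random.default_rng(3)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(1, 4), axis=(0, 1))
dx, dy = window_displacement(frame_a, frame_b)
dt, gsd = 1 / 30.0, 0.02                           # s between frames, m per pixel (assumed)
print((dx * gsd / dt, dy * gsd / dt))              # surface velocity components in m/s
```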

  2. The variability of software scoring of the CDMAM phantom associated with a limited number of images

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying J.; Van Metter, Richard

    2007-03-01

    Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all such methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. For one measurement trial, the threshold visibility thickness (TVT) for each disk diameter was determined from the CDCOM scorings of a randomly selected group of eight images, using previously reported post-analysis methods. This random selection was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability for the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for the experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test can be underestimated by the commonly used method of random re-sampling.
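
    A sketch of the random re-sampling described above, under assumed inputs: `detected` is a boolean array of shape (36 images, n_diameters) recording whether the disk of each diameter was correctly scored in each image, and eight images are drawn per trial. The spread of the per-trial detection fraction illustrates the variability that propagates into the threshold visibility thickness; the array layout and the simulated scoring probability are hypothetical, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def resample_detection_fractions(detected, n_select=8, n_trials=3000):
            """Repeatedly score a random subset of phantom images.

            detected : (n_images, n_diameters) boolean array of per-image CDCOM
                       scoring outcomes (assumed input format)
            Returns  : (n_trials, n_diameters) array of detection fractions
            """
            n_images = detected.shape[0]
            fractions = np.empty((n_trials, detected.shape[1]))
            for t in range(n_trials):
                pick = rng.choice(n_images, size=n_select, replace=False)
                fractions[t] = detected[pick].mean(axis=0)
            return fractions

        # Example with simulated binomial scoring outcomes (p = 0.7 per image).
        detected = rng.random((36, 16)) < 0.7
        frac = resample_detection_fractions(detected)
        print("std of detection fraction per diameter:", frac.std(axis=0))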

  3. Geometric correction of synchronous scanned Operational Modular Imaging Spectrometer II hyperspectral remote sensing images using spatial positioning data of an inertial navigation system

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao

    2015-01-01

    High-precision geometric correction of airborne hyperspectral remote sensing images has long been a difficult problem, and conventional correction methods based on selecting ground control points are not well suited to airborne hyperspectral imagery. Data from an inertial measurement unit combined with a differential global positioning system (IMU/DGPS) are introduced to correct synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Attitude and position parameters synchronized with OMIS II were first obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were carried out. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. Improved image processing results were achieved.
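
    A sketch of the attitude-based step of such a correction, assuming roll, pitch and heading angles from the IMU/DGPS and a scanner look angle: it builds the rotation matrix that maps a pixel's line-of-sight vector from the sensor frame to a local level frame, from which the ground intersection can be computed under a flat-terrain approximation. Angle conventions and frame definitions vary between systems, so this is illustrative only and is not the OMIS II correction model.

        import numpy as np

        def rotation_body_to_level(roll, pitch, heading):
            """Rotation matrix from the sensor/body frame to a local level frame.

            Angles in radians; the roll-pitch-heading order is one common
            convention and may differ from the convention used for OMIS II.
            """
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            ch, sh = np.cos(heading), np.sin(heading)
            Rr = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            Rp = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rh = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
            return Rh @ Rp @ Rr

        def ground_offset(scan_angle, roll, pitch, heading, altitude):
            """Horizontal offset (east, north) of a pixel's ground point relative
            to nadir, assuming flat terrain below the aircraft."""
            los_body = np.array([0.0, np.sin(scan_angle), -np.cos(scan_angle)])
            los_level = rotation_body_to_level(roll, pitch, heading) @ los_body
            scale = altitude / -los_level[2]      # extend the ray down to the ground
            return los_level[0] * scale, los_level[1] * scale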

  4. High-throughput automated home-cage mesoscopic functional imaging of mouse cortex

    PubMed Central

    Murphy, Timothy H.; Boyd, Jamie D.; Bolaños, Federico; Vanni, Matthieu P.; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M.

    2016-01-01

    Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity for up to 24 h per day across >7,000 self-initiated and unsupervised imaging sessions over up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514

  5. Amplitude modulation of alpha-band rhythm caused by mimic collision: MEG study.

    PubMed

    Yokosawa, Koichi; Watanabe, Tatsuya; Kikuzawa, Daichi; Aoyama, Gakuto; Takahashi, Makoto; Kuriki, Shinya

    2013-01-01

    Detection of a collision risk and avoiding the collision are important for survival. We have been investigating neural responses when humans anticipate a collision or intend to take evasive action by applying collision-simulating images in a predictable manner. Collision-simulating images and control images were presented in random order to 9 healthy male volunteers. A cue signal was also given visually two seconds before each stimulus to enable each participant to anticipate the upcoming stimulus. Magnetoencephalograms (MEG) were recorded with a 76-ch helmet system. The amplitude of alpha band (8-13 Hz) rhythm when anticipating the upcoming collision-simulating image was significantly smaller than that when anticipating control images even just after the cue signal. This result demonstrates that anticipating a negative (dangerous) event induced event-related desynchronization (ERD) of alpha band activity, probably caused by attention. The results suggest the feasibility of detecting endogenous brain activities by monitoring alpha band rhythm and its possible applications to engineering systems, such as an automatic collision evasion system for automobiles.

  6. Ultrasonic scanning system for imaging flaw growth in composites

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.; Meyn, E. H.

    1982-01-01

    A system for measuring and visually representing damage in composite specimens while they are being loaded was demonstrated. It uses a hobbyist-grade microcomputer system to control data taking and image processing. The system scans operator-selected regions of the specimen while it is under load in a tensile test machine and measures internal damage by the attenuation of a 2.5 MHz ultrasonic beam passed through the specimen. The microcomputer dynamically controls the position of ultrasonic transducers mounted on a two-axis motor-driven carriage. As many as 65,536 samples can be taken and filed on a floppy disk system in less than four minutes.

  7. Segmentation of financial seals and its implementation on a DSP-based system

    NASA Astrophysics Data System (ADS)

    He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao

    2009-11-01

    Automatic seal imprint identification is an important part of modern financial security, and accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor) based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement the image processing and to control and coordinate the work of each system module. The proposed algorithm consists of three stages: extraction of the grayscale seal image, denoising, and binarization. A grayscale seal image was extracted by a color transform from a financial instrument image. Adaptive morphological operations were used to highlight details of the extracted grayscale seal image and smooth the background. After median filtering for noise elimination, the filtered seal image was binarized by Otsu's method. The algorithm was developed in the DSP development environment CCS with the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, the white-balance calibration and the coarse positioning of the seal imprint were implemented by having the TMS320DM642 control image acquisition. The IMGLIB library of the TMS320DM642 was used to improve efficiency. The experimental results showed that financial seal imprints, even those with intricate and dense strokes, can be correctly segmented by the proposed algorithm. Adhesion and incompleteness distortions in the segmentation results were reduced, even when the original seal imprint was of poor quality.
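
    A rough OpenCV sketch of the three-stage pipeline outlined above (grayscale seal extraction by a colour transform, morphological enhancement, median filtering and Otsu binarization). The choice of the Cr chrominance channel, the kernel sizes and the fixed top-hat-like enhancement are assumptions for illustration; the paper's adaptive morphology is not reproduced.

        import cv2
        import numpy as np

        def segment_seal(bgr_image):
            """Return a binary seal image from a colour financial-instrument image."""
            # 1. Colour transform: red seal ink stands out in the Cr channel of YCrCb
            #    (an assumed stand-in for the paper's colour transform).
            ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
            gray = ycrcb[:, :, 1]

            # 2. Morphological enhancement: close small gaps in strokes and subtract
            #    a heavily opened version to flatten the background.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
            big_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
            background = cv2.morphologyEx(closed, cv2.MORPH_OPEN, big_kernel)
            enhanced = cv2.subtract(closed, background)

            # 3. Denoise and binarize with Otsu's threshold.
            denoised = cv2.medianBlur(enhanced, 3)
            _, binary = cv2.threshold(denoised, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return binary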

  8. A cost effective and high fidelity fluoroscopy simulator using the Image-Guided Surgery Toolkit (IGSTK)

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Jenkins, Brad; Sze, Raymond W.; Yaniv, Ziv

    2014-03-01

    The skills required for obtaining informative x-ray fluoroscopy images are currently acquired while trainees provide clinical care. As a consequence, trainees and patients are exposed to higher doses of radiation. Use of simulation has the potential to reduce this radiation exposure by enabling trainees to improve their skills in a safe environment prior to treating patients. We describe a low cost, high fidelity, fluoroscopy simulation system. Our system enables operators to practice their skills using the clinical device and simulated x-rays of a virtual patient. The patient is represented using a set of temporal Computed Tomography (CT) images, corresponding to the underlying dynamic processes. Simulated x-ray images, digitally reconstructed radiographs (DRRs), are generated from the CTs using ray-casting with customizable machine specific imaging parameters. To establish the spatial relationship between the CT and the fluoroscopy device, the CT is virtually attached to a patient phantom and a web camera is used to track the phantom's pose. The camera is mounted on the fluoroscope's intensifier and the relationship between it and the x-ray source is obtained via calibration. To control image acquisition the operator moves the fluoroscope as in normal operation mode. Control of zoom, collimation and image save is done using a keypad mounted alongside the device's control panel. Implementation is based on the Image-Guided Surgery Toolkit (IGSTK), and the use of the graphics processing unit (GPU) for accelerated image generation. Our system was evaluated by 11 clinicians and was found to be sufficiently realistic for training purposes.
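
    A simplified sketch of digitally reconstructed radiograph (DRR) generation, using a parallel-beam approximation instead of the full ray-casting with machine-specific parameters described above: the CT volume is rotated to the desired view and attenuation is summed along the projection axis. The attenuation conversion, geometry and parameter values are assumptions for illustration only.

        import numpy as np
        from scipy.ndimage import rotate

        def parallel_drr(ct_hu, angle_deg, mu_water=0.02):
            """Parallel-beam DRR from a CT volume in Hounsfield units.

            ct_hu     : 3-D array (z, y, x)
            angle_deg : rotation about the z axis before projecting along x
            mu_water  : linear attenuation coefficient of water (1/mm), assumed
            """
            # Hounsfield units -> linear attenuation coefficients.
            mu = mu_water * (1.0 + ct_hu / 1000.0)
            mu = np.clip(mu, 0.0, None)
            # Rotate the volume in the (y, x) plane, then integrate along x.
            rotated = rotate(mu, angle_deg, axes=(1, 2), reshape=False, order=1)
            line_integrals = rotated.sum(axis=2)
            # Simulated x-ray intensity after exponential attenuation.
            return np.exp(-line_integrals)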

  9. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) detection in vivo is vital in breast cancer surgery. This paper presents new imaging software for a near-infrared fluorescence-based surgical navigation system (SNS), developed by our research group for SLN detection surgery. The software is built on the fluorescence-based surgical navigation hardware system (SNHS) developed in our lab and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of several modules, mainly the control module, the image grabbing module, the real-time display module, the data saving module and the image processing module. Several algorithms have been designed to achieve the required performance, for example an image registration algorithm based on correlation matching. Key features of the software include setting the control parameters of the SNS; automatically acquiring, displaying and storing the intraoperative imaging data in real time; and analyzing and processing the saved image data. The developed software has been used to successfully detect the SLNs in 21 breast cancer patients. In the near future, we plan to improve the software performance, and it will be used extensively for clinical purposes.
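
    A minimal illustration of correlation-based registration of a translational offset between two frames (for example a fluorescence frame and a colour video frame), using scikit-image's phase correlation as a stand-in for the correlation matching algorithm mentioned above, whose details and parameters are not given in the record. Function and variable names are hypothetical.

        import numpy as np
        from scipy.ndimage import shift
        from skimage.registration import phase_cross_correlation

        def align_fluorescence_to_color(fluorescence, color_gray):
            """Shift the fluorescence frame so that it overlays the colour frame."""
            offset, error, _ = phase_cross_correlation(color_gray, fluorescence,
                                                       upsample_factor=10)
            aligned = shift(fluorescence, offset, order=1, mode="nearest")
            return aligned, offset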

  10. The Earth Observing System AM Spacecraft - Thermal Control Subsystem

    NASA Technical Reports Server (NTRS)

    Chalmers, D.; Fredley, J.; Scott, C.

    1993-01-01

    Mission requirements for the EOS-AM Spacecraft intended to monitor global changes of the entire earth system are considered. The spacecraft is based on an instrument set containing the Advanced Spaceborne Thermal Emission and Reflection radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multiangle Imaging Spectro-Radiometer (MISR), Moderate-Resolution Imaging Spectrometer (MODIS), and Measurements of Pollution in the Troposphere (MOPITT). Emphasis is placed on the design, analysis, development, and verification plans for the unique EOS-AM Thermal Control Subsystem (TCS) aimed at providing the required environments for all the onboard equipment in a densely packed layout. The TCS design maximizes the use of proven thermal design techniques and materials, in conjunction with a capillary pumped two-phase heat transport system for instrument thermal control.

  11. Advancement of Optical Component Control for an Imaging Fabry-Perot Interferometer

    NASA Technical Reports Server (NTRS)

    Larar, Allen M.; Cook, William B.; Flood, Michael A.; Campbell, Joel F.; Boyer, Charles M.

    2009-01-01

    Risk mitigation activities associated with a prototype imaging Fabry-Perot Interferometer (FPI) system are continuing at the NASA Langley Research Center. The system concept and technology center on enabling and improving future space-based atmospheric composition missions, with a current focus on observing tropospheric ozone around 9.6 micron, while remaining applicable to measurements in other spectral regions and other applications. Recent activities have focused on improving an optical element control subsystem to enable precise and accurate positioning and control of the etalon plates; this is needed to provide the high spectral fidelity critical for the required ability to spectrally resolve atmospheric line structure. The latest results pertaining to methodology enhancements, system implementation, and laboratory characterization testing will be reported.

  12. [Research on Spectral Polarization Imaging System Based on Static Modulation].

    PubMed

    Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng

    2015-04-01

    The main disadvantages of traditional spectral polarization imaging systems are a complex structure, moving parts, and low throughput. A novel spectral polarization imaging method is discussed, based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain spectral information and all four Stokes polarization parameters in real time. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and large throughput. The system structure and the basic theory are introduced. The experimental system was established in the laboratory and consists of reimaging optics, a polarization intensity modulation module, an interference imaging module, and a CCD data collection and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged with the experimental system, verifying the ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of the degree-of-polarization measurement is less than 5%. The validity and feasibility of the basic principle are confirmed by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification and remote sensing detection.
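
    For readers unfamiliar with the polarization quantities mentioned above, a small sketch of how the linear Stokes parameters and the degree of linear polarization would be computed from four analyzer-oriented intensity images (0°, 45°, 90°, 135°). The static-modulation demodulation used by the actual instrument is more involved and is not reproduced here.

        import numpy as np

        def linear_stokes(i0, i45, i90, i135):
            """Linear Stokes parameters and degree of linear polarization (DoLP)
            from four intensity images measured behind analyzers at 0/45/90/135 deg."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
            s1 = i0 - i90                        # horizontal vs vertical
            s2 = i45 - i135                      # +45 deg vs -45 deg
            dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
            return s0, s1, s2, dolp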

  13. Focused ultrasound thermal therapy system with ultrasound image guidance and temperature measurement feedback.

    PubMed

    Lin, Kao-Han; Young, Sun-Yi; Hsu, Ming-Chuan; Chan, Hsu; Chen, Yung-Yaw; Lin, Win-Li

    2008-01-01

    In this study, we developed a focused ultrasound (FUS) thermal therapy system with ultrasound image guidance and thermocouple temperature measurement feedback. Hydraulic positioning devices and computer-controlled servo motors were used to move the FUS transducer to the desired location, with the actual movement measured by a linear scale. The entire system integrates the automatic positioning devices, FUS transducer, power amplifier, ultrasound imaging system, and thermocouple temperature measurement into a graphical user interface. For the treatment procedure, a thermocouple was implanted into the targeted treatment region in a tissue-mimicking phantom under ultrasound image guidance, and the acoustic interference pattern formed by the imaging ultrasound beam and a low-power FUS beam was then used as image guidance to move the FUS transducer until its focal zone coincided with the thermocouple tip. The thermocouple temperature rise was used to determine the sonication duration for a suitable thermal lesion once high power was turned on, and ultrasound imaging was used to capture the thermal lesion formation. For multiple lesion formation, the FUS transducer was moved under acoustic interference guidance to a new location and then sonicated with the same power level and duration. The system was evaluated, and the results showed that it could perform two-dimensional motion control for two-dimensional thermal therapy with a small localization error of 0.5 mm. Through the user interface, the FUS transducer could be moved to heat the target region under the guidance of ultrasound imaging and the acoustic interference pattern. The preliminary phantom experiments demonstrated that the system could carry out the desired treatment plan satisfactorily.

  14. 2D microwave imaging reflectometer electronics.

    PubMed

    Spear, A G; Domier, C W; Hu, X; Muscatello, C M; Ren, X; Tobias, B J; Luhmann, N C

    2014-11-01

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.

  15. Image segmentation for enhancing symbol recognition in prosthetic vision.

    PubMed

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.
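
    A rough sketch of the idea of segmenting a scene and phosphenizing only the fixated region: the image is over-segmented (here with SLIC superpixels as an assumed stand-in for the paper's segmentation method), the segment under the user-controlled fixation point is isolated, and the rest of the scene is suppressed before downsampling to a phosphene grid. All parameter values and the grid size are illustrative assumptions.

        import numpy as np
        from skimage.segmentation import slic
        from skimage.transform import resize

        def phosphenize_fixated_region(rgb, fixation_rc, grid=(24, 32)):
            """Return a low-resolution 'phosphene' image of the fixated segment.

            rgb         : float RGB image with values in [0, 1]
            fixation_rc : (row, col) fixation point controlled by the user
            grid        : phosphene grid resolution (assumed)
            """
            labels = slic(rgb, n_segments=150, compactness=10, start_label=0)
            selected = labels == labels[fixation_rc]
            gray = rgb.mean(axis=2)
            # Keep the selected region at full brightness, dim everything else.
            masked = np.where(selected, gray, 0.2 * gray)
            return resize(masked, grid, anti_aliasing=True)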

  16. Simulation of digital mammography images

    NASA Astrophysics Data System (ADS)

    Workman, Adam

    2005-04-01

    A number of different technologies are available for digital mammography. However, it is not clear how differences in the physical performance of the different imaging technologies affect clinical performance. Randomised controlled trials provide a means of gaining information on clinical performance; however, they do not provide a direct comparison of the different digital imaging technologies. This work describes a method of simulating the performance of different digital mammography systems. The method involves modifying the imaging performance parameters of images from a small field-of-view, high-resolution digital imaging system (SFDM) used for spot imaging. Under normal operating conditions this system produces images with a higher signal-to-noise ratio (SNR) over a wide spatial frequency range than current full-field digital mammography (FFDM) systems. The SFDM images can be 'degraded' by computer processing to simulate the characteristics of a FFDM system. Initial work characterised the physical performance (MTF, NPS) of the SFDM detector and developed a model and method for simulating the signal transfer and noise properties of a FFDM system. The SNR properties of the simulated FFDM images were found to be very similar to those measured from an actual FFDM system, verifying the methodology used. The application of this technique to clinical images from the small-field system will allow the clinical performance of different FFDM systems to be simulated and directly compared using the same clinical image datasets.
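
    A toy sketch of the kind of image "degradation" used to simulate a lower-performance detector from a higher-SNR one: a Gaussian blur stands in for the reduced MTF, and extra noise is injected so the noise power rises toward the target system. The blur width and noise level are placeholders; the published method matches measured MTF and NPS curves rather than these simple parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(1)

        def degrade_image(high_snr_image, blur_sigma=1.2, extra_noise_std=6.0):
            """Simulate a lower-resolution, noisier detector from a high-SNR image.

            blur_sigma      : Gaussian sigma (pixels) approximating the MTF loss
            extra_noise_std : standard deviation of added noise (digital values)
            Both parameters are illustrative placeholders.
            """
            blurred = gaussian_filter(high_snr_image.astype(float), blur_sigma)
            noisy = blurred + rng.normal(0.0, extra_noise_std, size=blurred.shape)
            return noisy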

  17. Spacecraft design project: High temperature superconducting infrared imaging satellite

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The High Temperature Superconductor Infrared Imaging Satellite (HTSCIRIS) is designed to perform the space based infrared imaging and surveillance mission. The design of the satellite follows the black box approach. The payload is a stand alone unit, with the spacecraft bus designed to meet the requirements of the payload as listed in the statement of work. Specifications influencing the design of the spacecraft bus were originated by the Naval Research Lab. A description of the following systems is included: spacecraft configuration, orbital dynamics, radio frequency communication subsystem, electrical power system, propulsion, attitude control system, thermal control, and structural design. The issues of testing and cost analysis are also addressed. This design project was part of the course Advanced Spacecraft Design taught at the Naval Postgraduate School.

  18. System and method for controlling depth of imaging in tissues using fluorescence microscopy under ultraviolet excitation following staining with fluorescing agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demos, Stavros; Levenson, Richard

    The present disclosure relates to a method for analyzing tissue specimens. In one implementation the method involves obtaining a tissue sample and exposing the sample to one or more fluorophores as contrast agents to enhance contrast of subcellular compartments of the tissue sample. The tissue sample is illuminated by an ultraviolet (UV) light having a wavelength between about 200 nm to about 400 nm, with the wavelength being selected to result in penetration to only a specified depth below a surface of the tissue sample. Inter-image operations between images acquired under different imaging parameters allow for improvement of the image quality via removal of unwanted image components. A microscope may be used to image the tissue sample and provide the image to an image acquisition system that makes use of a camera. The image acquisition system may create a corresponding image that is transmitted to a display system for processing and display.

  19. Design of intelligent vehicle control system based on single chip microcomputer

    NASA Astrophysics Data System (ADS)

    Zhang, Congwei

    2018-06-01

    The smart car's microprocessor is the KL25ZV128VLK4 from the Freescale series of single-chip microcomputers, and the image sampling sensor is the CMOS digital camera OV7725. The acquired track data are processed by the corresponding algorithm to obtain track sideline information. Pulse-width modulation (PWM) is used to drive the motor and servo, and motor speed control and servo steering control are realized with a digital incremental PID algorithm. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, and motor drive and servo control module, completing the design of the intelligent car control system.
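
    A small sketch of the digital incremental PID law mentioned above, in which the controller outputs a change in drive rather than an absolute value: Δu_k = Kp(e_k − e_{k−1}) + Ki·e_k + Kd(e_k − 2e_{k−1} + e_{k−2}). The gains, the PWM clamping range and the usage values are placeholders, and the original design runs in C on the microcontroller rather than in Python.

        class IncrementalPID:
            """Incremental (velocity-form) PID for motor speed control."""

            def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.out_min, self.out_max = out_min, out_max
                self.e1 = 0.0   # e_{k-1}
                self.e2 = 0.0   # e_{k-2}
                self.output = 0.0

            def update(self, setpoint, measurement):
                e = setpoint - measurement
                delta = (self.kp * (e - self.e1)
                         + self.ki * e
                         + self.kd * (e - 2.0 * self.e1 + self.e2))
                self.output = min(max(self.output + delta, self.out_min), self.out_max)
                self.e2, self.e1 = self.e1, e
                return self.output   # e.g. a PWM duty cycle in percent

        # Hypothetical usage: drive the motor toward 2000 encoder counts/s.
        pid = IncrementalPID(kp=0.8, ki=0.15, kd=0.05)
        duty = pid.update(setpoint=2000.0, measurement=1820.0)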

  20. Design of embedded endoscopic ultrasonic imaging system

    NASA Astrophysics Data System (ADS)

    Li, Ming; Zhou, Hao; Wen, Shijie; Chen, Xiodong; Yu, Daoyin

    2008-12-01

    The endoscopic ultrasonic imaging system is an important component of an endoscopic ultrasonography system (EUS). Through the ultrasonic probe, the EUS detects the histological features of the digestive organs; the ultrasonic echo is then received by a reception circuit consisting of amplification, gain compensation, filtering and A/D conversion stages. The endoscopic ultrasonic imaging system is the back-end processing system of the EUS: it receives the digitized ultrasonic echo modulated by the digestive tract wall from the reception circuit and, after digital signal processing such as demodulation, presents the histological features as images and characteristic data. Traditional endoscopic ultrasonic imaging systems are mainly based on image acquisition and processing chips connected to a personal computer over USB 2.0, which makes them expensive, structurally complicated, poorly portable and difficult to popularize. To address these shortcomings, this paper presents a digital signal acquisition and processing design based on embedded technology, with ARM and FPGA as the core hardware, replacing the traditional USB 2.0 and personal computer design. Using built-in FIFOs and a dual buffer, the FPGA implements ping-pong data storage while transferring image data to the ARM through the EBI bus by DMA under ARM control, achieving high-speed transmission. The ARM system displays an image each time a DMA transfer completes and handles system control through drivers and applications running on the embedded operating system Windows CE, which provides a stable, safe and reliable platform for the embedded device software. Benefiting from the graphical user interface (GUI) and good performance of Windows CE, the application program clearly displays 511×511-pixel ultrasonic echo images and provides a simple, friendly operating interface with mouse and touch screen that is more convenient than traditional endoscopic ultrasonic imaging systems. The complete embedded system, including the FPGA and ARM cores and their peripheral circuits, the power network circuit and the LCD display circuit, was designed, and experimental verification showed that the ultrasonic images are displayed properly, solving the bulk and complexity problems of traditional endoscopic ultrasonic imaging systems.

  1. Design and evaluation of a computed tomography (CT)-compatible needle insertion device using an electromagnetic tracking system and CT images.

    PubMed

    Shahriari, Navid; Hekman, Edsko; Oudkerk, Matthijs; Misra, Sarthak

    2015-11-01

    Percutaneous needle insertion procedures are commonly used for diagnostic and therapeutic purposes. Although current technology allows accurate localization of lesions, they cannot yet be precisely targeted. Lung cancer is the most common cause of cancer-related death, and early detection reduces the mortality rate. Therefore, suspicious lesions are tested for diagnosis by performing needle biopsy. In this paper, we have presented a novel computed tomography (CT)-compatible needle insertion device (NID). The NID is used to steer a flexible needle (φ0.55 mm) with a bevel at the tip in biological tissue. CT images and an electromagnetic (EM) tracking system are used in two separate scenarios to track the needle tip in three-dimensional space during the procedure. Our system uses a control algorithm to steer the needle through a combination of insertion and minimal number of rotations. Noise analysis of CT images has demonstrated the compatibility of the device. The results for three experimental cases (case 1: open-loop control, case 2: closed-loop control using EM tracking system and case 3: closed-loop control using CT images) are presented. Each experimental case is performed five times, and average targeting errors are 2.86 ± 1.14, 1.11 ± 0.14 and 1.94 ± 0.63 mm for case 1, case 2 and case 3, respectively. The achieved results show that our device is CT-compatible and it is able to steer a bevel-tipped needle toward a target. We are able to use intermittent CT images and EM tracking data to control the needle path in a closed-loop manner. These results are promising and suggest that it is possible to accurately target the lesions in real clinical procedures in the future.

  2. [A new concept for integration of image databanks into a comprehensive patient documentation].

    PubMed

    Schöll, E; Holm, J; Eggli, S

    2001-05-01

    Image processing and archiving are of increasing importance in the practice of modern medicine. Particularly due to the introduction of computer-based investigation methods, physicians are dealing with a wide variety of analogue and digital picture archives. On the other hand, clinical information is stored in various text-based information systems without integration of image components. The link between such traditional medical databases and picture archives is a prerequisite for efficient data management as well as for continuous quality control and medical education. At the Department of Orthopedic Surgery, University of Berne, a software program was developed to create a complete multimedia electronic patient record. The client-server system contains all patients' data, questionnaire-based quality control, and a digital picture archive. Different interfaces guarantee the integration into the hospital's data network. This article describes our experiences in the development and introduction of a comprehensive image archiving system at a large orthopedic center.

  3. Digitally controlled analog proportional-integral-derivative (PID) controller for high-speed scanning probe microscopy

    NASA Astrophysics Data System (ADS)

    Dukic, Maja; Todorov, Vencislav; Andany, Santiago; Nievergelt, Adrian P.; Yang, Chen; Hosseini, Nahid; Fantner, Georg E.

    2017-12-01

    Nearly all scanning probe microscopes (SPMs) contain a feedback controller, which is used to move the scanner in the direction of the z-axis in order to maintain a constant setpoint based on the tip-sample interaction. The most frequently used feedback controller in SPMs is the proportional-integral (PI) controller. The bandwidth of the PI controller presents one of the speed limiting factors in high-speed SPMs, where higher bandwidths enable faster scanning speeds and higher imaging resolution. Most SPM systems use digital signal processor-based PI feedback controllers, which require analog-to-digital and digital-to-analog converters. These converters introduce additional feedback delays which limit the achievable imaging speed and resolution. In this paper, we present a digitally controlled analog proportional-integral-derivative (PID) controller. The controller implementation allows tunability of the PID gains over a large amplification and frequency range, while also providing precise control of the system and reproducibility of the gain parameters. By using the analog PID controller, we were able to perform successful atomic force microscopy imaging of a standard silicon calibration grating at line rates up to several kHz.

  4. Digitally controlled analog proportional-integral-derivative (PID) controller for high-speed scanning probe microscopy.

    PubMed

    Dukic, Maja; Todorov, Vencislav; Andany, Santiago; Nievergelt, Adrian P; Yang, Chen; Hosseini, Nahid; Fantner, Georg E

    2017-12-01

    Nearly all scanning probe microscopes (SPMs) contain a feedback controller, which is used to move the scanner in the direction of the z-axis in order to maintain a constant setpoint based on the tip-sample interaction. The most frequently used feedback controller in SPMs is the proportional-integral (PI) controller. The bandwidth of the PI controller presents one of the speed limiting factors in high-speed SPMs, where higher bandwidths enable faster scanning speeds and higher imaging resolution. Most SPM systems use digital signal processor-based PI feedback controllers, which require analog-to-digital and digital-to-analog converters. These converters introduce additional feedback delays which limit the achievable imaging speed and resolution. In this paper, we present a digitally controlled analog proportional-integral-derivative (PID) controller. The controller implementation allows tunability of the PID gains over a large amplification and frequency range, while also providing precise control of the system and reproducibility of the gain parameters. By using the analog PID controller, we were able to perform successful atomic force microscopy imaging of a standard silicon calibration grating at line rates up to several kHz.

  5. Development of a Digital Control for the Phase Contrast Imaging Alignment Feedback System

    NASA Astrophysics Data System (ADS)

    Hirata, M.; Marinoni, A.; Rost, J. C.; Davis, E. M.; Porkolab, M.

    2016-10-01

    The Phase Contrast Imaging diagnostic is an internal-reference interferometer that images density fluctuations on a 32-element linear detector array. Since proper operation of the system requires accurate alignment of a CO2 laser beam on a phase plate, beam motion due to vibrations of the DIII-D vessel needs to be compensated up to 1 kHz. The feedback network controlling the steering mirrors currently uses a linear analog controller, but a digital controller can provide improved stability performance and flexibility. A prototype was developed using an Arduino Due, a low-cost microcontroller, to assess its performance capabilities. Digital control parameters will be developed based on the measured frequency and phase response of the physical components. Finally, the digital feedback system will be tested and revised as required to achieve successful performance. This upgrade to the linear analog controller is expected to be used routinely on similar diagnostics in fusion devices, especially in view of restricted access to the machine hall. Work supported in part by the US Department of Energy under DE-FG02-94ER54235, DE-FC02-04ER54698, and the Science Undergraduate Laboratory Internships Program (SULI).

  6. Fuzzy control system for a remote focusing microscope

    NASA Astrophysics Data System (ADS)

    Weiss, Jonathan J.; Tran, Luc P.

    1992-01-01

    Space Station Crew Health Care System procedures require the use of an on-board microscope whose slide images will be transmitted for analysis by ground-based microbiologists. Focusing of microscope slides is low on the list of crew priorities, so NASA is investigating the option of telerobotic focusing controlled by the microbiologist on the ground, using continuous video feedback. However, even at Space Station distances, the transmission time lag may disrupt the focusing process, severely limiting the number of slides that can be analyzed within a given bandwidth allocation. Substantial time could be saved if on-board automation could pre-focus each slide before transmission. The authors demonstrate the feasibility of on-board automatic focusing using a fuzzy logic rule-based system to bring the slide image into focus. The original prototype system was produced in under two months and at low cost. Slide images are captured by a video camera and digitized by gray-scale value. A software function calculates an index of 'sharpness' based on gray-scale contrasts. The fuzzy logic rule-based system uses feedback to set the microscope's focusing control in an attempt to maximize sharpness. The system as currently implemented performs satisfactorily in focusing a variety of slide types at magnification levels ranging from 10x to 1000x. Although feasibility has been demonstrated, the system's performance and usability could be improved substantially in four ways: by upgrading the quality and resolution of the video imaging system (including the use of full color); by empirically defining and calibrating the index of image sharpness; by letting the overall focusing strategy vary depending on user-specified parameters; and by fine-tuning the fuzzy rules, set definitions, and procedures used.
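
    A sketch of the kind of gray-scale contrast sharpness index described above, together with a simple hill-climbing search over focus positions. The fuzzy rule base that actually sets the focus control is not reproduced, and `move_focus_and_capture` is a hypothetical callback standing in for the microscope hardware.

        import numpy as np

        def sharpness_index(gray):
            """Contrast-based sharpness: mean squared intensity gradient."""
            gy, gx = np.gradient(gray.astype(float))
            return float(np.mean(gx**2 + gy**2))

        def autofocus(move_focus_and_capture, start, step=10.0, max_steps=50):
            """Hill-climb the focus position until sharpness stops improving.

            move_focus_and_capture(position) -> 2-D grayscale image (hypothetical)
            """
            pos = start
            best = sharpness_index(move_focus_and_capture(pos))
            direction = 1.0
            for _ in range(max_steps):
                candidate = pos + direction * step
                score = sharpness_index(move_focus_and_capture(candidate))
                if score > best:
                    pos, best = candidate, score
                else:
                    direction = -direction      # reverse and take smaller steps
                    step *= 0.5
                    if step < 0.5:
                        break
            return pos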

  7. Satellite image collection optimization

    NASA Astrophysics Data System (ADS)

    Martin, William

    2002-09-01

    Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical both for satisfying customer orders and for building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time-dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of 'what-if' adjustments to an image collection plan. Used for both long-range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite.

  8. AAPM/RSNA physics tutorial for residents: physics of flat-panel fluoroscopy systems: Survey of modern fluoroscopy imaging: flat-panel detectors versus image intensifiers and more.

    PubMed

    Nickoloff, Edward Lee

    2011-01-01

    This article reviews the design and operation of both flat-panel detector (FPD) and image intensifier fluoroscopy systems. The different components of each imaging chain and their functions are explained and compared. FPD systems have multiple advantages such as a smaller size, extended dynamic range, no spatial distortion, and greater stability. However, FPD systems typically have the same spatial resolution for all fields of view (FOVs) and are prone to ghosting. Image intensifier systems have better spatial resolution with the use of smaller FOVs (magnification modes) and tend to be less expensive. However, the spatial resolution of image intensifier systems is limited by the television system to which they are coupled. Moreover, image intensifier systems are degraded by glare, vignetting, spatial distortions, and defocusing effects. FPD systems do not have these problems. Some recent innovations to fluoroscopy systems include automated filtration, pulsed fluoroscopy, automatic positioning, dose-area product meters, and improved automatic dose rate control programs. Operator-selectable features may affect both the patient radiation dose and image quality; these selectable features include dose level setting, the FOV employed, fluoroscopic pulse rates, geometric factors, display software settings, and methods to reduce the imaging time. © RSNA, 2011.

  9. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284

  10. A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.

    PubMed

    Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M

    2011-01-20

    A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes.For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.
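
    A small illustration of the 3D Hotelling-transform (principal component) initialization mentioned in both records above: each segmented point cloud is centred and rotated into its principal-axis frame, giving a coarse alignment from which a subsequent non-linear least-squares refinement can start. Point-cloud names are hypothetical, and the principal-axis sign ambiguity is handled only crudely here.

        import numpy as np

        def hotelling_frame(points):
            """Return (centroid, principal axes) of an (N, 3) point cloud."""
            centroid = points.mean(axis=0)
            centered = points - centroid
            # Eigen-decomposition of the covariance gives the principal axes.
            _, vecs = np.linalg.eigh(np.cov(centered.T))
            axes = vecs[:, ::-1]                  # order axes by decreasing variance
            if np.linalg.det(axes) < 0:           # keep a right-handed frame
                axes[:, -1] *= -1
            return centroid, axes

        def coarse_align(moving, fixed):
            """Coarsely align 'moving' onto 'fixed' via their Hotelling frames."""
            c_m, a_m = hotelling_frame(moving)
            c_f, a_f = hotelling_frame(fixed)
            rotation = a_f @ a_m.T
            return (moving - c_m) @ rotation.T + c_f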

  11. High contrast imaging through adaptive transmittance control in the focal plane

    NASA Astrophysics Data System (ADS)

    Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake

    2016-05-01

    High contrast imaging in the presence of a bright background is a challenging problem encountered in diverse applications ranging from the daily chore of driving into a sun-drenched scene to in vivo biomedical imaging in various types of keyhole surgery. Imaging in the presence of bright sources saturates the vision system, resulting in loss of scene fidelity, low image contrast and reduced resolution. The problem is exacerbated in retro-reflective imaging systems, where the light sources illuminating the object are unavoidably strong and typically mask the object features. This manuscript presents a novel theoretical framework, based on nonlinear analysis and adaptive focal-plane transmittance, that selectively removes object-domain sources of background light from the image plane, yielding local and global increases in image contrast. The background signal can either be of a global specular nature, giving rise to parallel illumination from the entire object surface, or can be represented by a mosaic of randomly orientated small specular surfaces; the latter is more representative of practical real-world imaging systems. The background signal thus comprises groups of oblique rays corresponding to the distributions of the mosaic surfaces. Through the imaging system, light from a group of like surfaces converges to a localized spot in the focal plane of the lens and then diverges to cast a localized bright spot in the image plane. The transmittance of a spatial light modulator positioned in the focal plane can therefore be adaptively controlled to block a particular source of background light, so that the image plane intensity is due entirely to the object features. Experimental image data are presented to verify the efficacy of the methodology.

  12. Cybernetic Basis and System Practice of Remote Sensing and Spatial Information Science

    NASA Astrophysics Data System (ADS)

    Tan, X.; Jing, X.; Chen, R.; Ming, Z.; He, L.; Sun, Y.; Sun, X.; Yan, L.

    2017-09-01

    Cybernetics provides a new set of ideas and methods for the study of modern science and has been widely applied in many areas. However, few researchers have introduced cybernetics into the field of remote sensing. Based on the imaging process of remote sensing systems, this paper introduces cybernetics into remote sensing and establishes a space-time closed-loop control theory for its actual operation. The paper makes the spatial information process coherent and improves the overall efficiency of spatial information from acquisition, processing and transformation through to application. We describe the application of cybernetics not only to remote sensing platform control, sensor control and data processing control, but also to control of the whole remote sensing imaging process. The output information is fed back to the input to control the efficient operation of the entire system. This combination of cybernetics and remote sensing science will raise remote sensing science to a higher level.

  13. A Prototype Instrument for Adaptive SPECT Imaging

    PubMed Central

    Freed, Melanie; Kupinski, Matthew A.; Furenlid, Lars R.; Barrett, Harrison H.

    2015-01-01

    We have designed and constructed a small-animal adaptive SPECT imaging system as a prototype for quantifying the potential benefit of adaptive SPECT imaging over the traditional fixed geometry approach. The optical design of the system is based on filling the detector with the object for each viewing angle, maximizing the sensitivity, and optimizing the resolution in the projection images. Additional feedback rules for determining the optimal geometry of the system can be easily added to the existing control software. Preliminary data have been taken of a phantom with a small, hot, offset lesion in a flat background in both adaptive and fixed geometry modes. Comparison of the predicted system behavior with the actual system behavior is presented along with recommendations for system improvements. PMID:26346820

  14. Identification and control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.; Singh, N. B.

    1992-01-01

    This paper presents an intelligent adaptive control system for the control of a solid-liquid interface of a crystal while it is growing via directional solidification inside a multizone transparent furnace. The task of the process controller is to establish a user-specified axial temperature profile and to maintain a desirable interface shape. Both single-input-single-output and multi-input-multi-output adaptive pole placement algorithms have been used to control the temperature. Also described is an intelligent measurement system to assess the shape of the crystal while it is growing. A color video imaging system observes the crystal in real time and determines the position and the shape of the interface. This information is used to evaluate the crystal growth rate, and to analyze the effects of translational velocity and temperature profiles on the shape of the interface. Creation of this knowledge base is the first step to incorporate image processing into furnace control.

  15. Keck adaptive optics: control subsystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brase, J.M.; An, J.; Avicola, K.

    1996-03-08

    Adaptive optics on the Keck 10 meter telescope will provide an unprecedented level of capability in high resolution ground based astronomical imaging. The system is designed to provide near diffraction limited imaging performance with Strehl > 0.3 in median Keck seeing of r0 = 25 cm, T = 10 msec at 500 nm wavelength. The system will be equipped with a 20 watt sodium laser guide star to provide nearly full sky coverage. The wavefront control subsystem is responsible for wavefront sensing and the control of the tip-tilt and deformable mirrors which actively correct atmospheric turbulence. The spatial sampling interval for the wavefront sensor and deformable mirror is de = 0.56 m, which gives us 349 actuators and 244 subapertures. This paper summarizes the wavefront control system and discusses particular issues in designing a wavefront controller for the Keck telescope.

  16. An Integrated System for Superharmonic Contrast-Enhanced Ultrasound Imaging: Design and Intravascular Phantom Imaging Study.

    PubMed

    Li, Yang; Ma, Jianguo; Martin, K Heath; Yu, Mingyue; Ma, Teng; Dayton, Paul A; Jiang, Xiaoning; Shung, K Kirk; Zhou, Qifa

    2016-09-01

    Superharmonic contrast-enhanced ultrasound imaging, also called acoustic angiography, has previously been used for imaging microvasculature. This approach excites microbubble contrast agents near their resonance frequency and receives echoes in non-overlapping superharmonic bandwidths. No existing integrated system could fully support this application. To fill this need, an integrated dual-channel transmit/receive system for superharmonic imaging was designed, built, and characterized experimentally. The system was designed specifically for superharmonic imaging and high-resolution B-mode imaging. A complete ultrasound system including a pulse generator, a data acquisition unit, and a signal processing unit was integrated into a single package. The system was controlled by a field-programmable gate array, on which multiple user-defined modes were implemented. A 6/35-MHz dual-frequency, dual-element intravascular ultrasound transducer was designed and used for imaging. The system obtained high-resolution B-mode images of a coronary artery ex vivo with 45-dB dynamic range and acquired in vitro superharmonic images of a vasa vasorum-mimicking phantom with 30-dB contrast. It could detect a contrast-agent-filled tissue-mimicking tube of 200 μm diameter. For the first time, high-resolution B-mode images and superharmonic images were obtained in an intravascular phantom, made possible by the dedicated integrated system proposed. The system greatly reduces the cost and complexity of superharmonic imaging intended for preclinical study. Significance: the system shows promise for high-contrast intravascular microvascular imaging, which may be of significant importance for assessment of the vasa vasorum associated with atherosclerotic plaques.

  17. Integration of Irma tactical scene generator into directed-energy weapon system simulation

    NASA Astrophysics Data System (ADS)

    Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.

    2003-08-01

    Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.

  18. The Precision Formation Flying Integrated Analysis Tool (PFFIAT)

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor

    2004-01-01

    Several space missions presently in the concept phase (e.g. Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedently large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g. solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g. phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.

  19. The Precision Formation Flying Integrated Analysis Tool (PFFIAT)

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor

    2004-01-01

    Several space missions presently in the concept phase (e.g. Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedentedly large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g. solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g. phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.

  20. Highly Portable Airborne Multispectral Imaging System

    NASA Technical Reports Server (NTRS)

    Lehnemann, Robert; Mcnamee, Todd

    2001-01-01

    A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.

  1. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  2. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  3. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  4. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  5. TL dosimetry for quality control of CR mammography imaging systems

    NASA Astrophysics Data System (ADS)

    Gaona, E.; Nieto, J. A.; Góngora, J. A. I. D.; Arreola, M.; Enríquez, J. G. F.

    The aim of this work is to estimate the average glandular dose with thermoluminescent (TL) dosimetry and to compare it with image quality in computed radiography (CR) mammography. For measuring dose, the Food and Drug Administration (FDA) and the American College of Radiology (ACR) use a phantom, so that dose and image quality are assessed with the same test object. Mammography is a radiological imaging technique used to visualize early biological manifestations of breast cancer. Digital systems have two types of image-capturing devices, full field digital mammography (FFDM) and CR mammography. In Mexico, there are several CR mammography systems in clinical use, but only one system has been approved for use by the FDA. CR mammography uses a photostimulable phosphor detector (PSP) system. Most CR plates are made of 85% BaFBr and 15% BaFI doped with europium (Eu), commonly called barium fluorohalide. We carried out an exploratory survey of six CR mammography units from three different manufacturers and six dedicated X-ray mammography units with fully automatic exposure. The results show that three CR mammography units (50%) deliver a dose greater than 3.0 mGy without demonstrating improved image quality. The differences between the average doses from the TLD system and from an ionization-chamber dosimeter are less than 10%. The TLD system is a good option for average glandular dose measurement for X-rays with the HVL (0.35-0.38 mmAl) and kVp (24-26) used in quality control procedures with the ACR Mammography Accreditation Phantom.

  6. System and method for progressive band selection for hyperspectral images

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin (Inventor)

    2013-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates a virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q of how many bands are needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored for individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
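
    A minimal sketch of the band-selection step is shown below; the ranking metric (per-band variance) and the fixed band count stand in for the application-specific metric and the virtual-dimensionality estimate of the patented method:

        import numpy as np

        def select_bands(cube, q):
            """Rank every spectral band by a simple statistical measure (variance here,
            standing in for an application-specific metric) and keep the top q bands."""
            scores = cube.reshape(-1, cube.shape[-1]).var(axis=0)   # one score per band
            keep = np.sort(np.argsort(scores)[::-1][:q])            # q highest-scoring bands
            return keep, cube[..., keep]

        # Synthetic 32 x 32 scene with 50 bands of increasing variance; in the actual
        # method q would come from the estimated virtual dimensionality.
        rng = np.random.default_rng(1)
        cube = rng.normal(size=(32, 32, 50)) * np.linspace(0.1, 2.0, 50)
        bands, reduced = select_bands(cube, q=5)
        print("selected bands:", bands, "reduced image shape:", reduced.shape)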

  7. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  8. Man-machine interactive imaging and data processing using high-speed digital mass storage

    NASA Technical Reports Server (NTRS)

    Alsberg, H.; Nathan, R.

    1975-01-01

    The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.

  9. Acousto-optic laser projection systems for displaying TV information

    NASA Astrophysics Data System (ADS)

    Gulyaev, Yu V.; Kazaryan, M. A.; Mokrushin, Yu M.; Shakin, O. V.

    2015-04-01

    This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation.

  10. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50m-500m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as microthruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for Heliophysics dual spacecraft PFF imaging mission concept.

  11. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50 m to 500 m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as micro-thruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for a Heliophysics dual spacecraft PFF imaging mission concept.

  12. CLFs-based optimization control for a class of constrained visual servoing systems.

    PubMed

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

    In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no requirement to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
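
    The Fibonacci method cited above is a standard one-dimensional search over a unimodal cost. The sketch below implements that search and applies it to a stand-in quadratic cost; the paper's actual CLF-derived criterion and gain bounds are not reproduced:

        def fibonacci_search(cost, lo, hi, n=20):
            """Classical Fibonacci search for the minimiser of a unimodal cost on [lo, hi];
            here it stands in for the on-line tuning of the adjustable control parameter."""
            fib = [1, 1]
            while len(fib) < n + 1:
                fib.append(fib[-1] + fib[-2])
            x1 = lo + fib[n - 2] / fib[n] * (hi - lo)
            x2 = lo + fib[n - 1] / fib[n] * (hi - lo)
            f1, f2 = cost(x1), cost(x2)
            for k in range(n - 2, 0, -1):
                if f1 > f2:                      # minimiser lies in [x1, hi]
                    lo, x1, f1 = x1, x2, f2
                    x2 = lo + fib[k] / fib[k + 1] * (hi - lo)
                    f2 = cost(x2)
                else:                            # minimiser lies in [lo, x2]
                    hi, x2, f2 = x2, x1, f1
                    x1 = lo + fib[k - 1] / fib[k + 1] * (hi - lo)
                    f1 = cost(x1)
            return 0.5 * (lo + hi)

        # Illustrative cost: squared feature error as a function of the control gain.
        gain = fibonacci_search(lambda g: (g - 0.7) ** 2 + 0.1, 0.0, 2.0)
        print(f"selected gain: {gain:.4f}")      # close to 0.7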

  13. An integral design strategy combining optical system and image processing to obtain high resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure to achieve efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function used during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted to process the simulated images, and mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has clear advantages in simplifying structure and reducing cost while simultaneously obtaining high resolution images, which gives it a promising prospect for industrial application.
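
    The evaluation loop described (restore the simulated low-resolution image with a Wiener filter, score with MSE) can be sketched as follows; the box-blur PSF, noise level, and noise-to-signal ratio are illustrative choices, not the paper's optical model:

        import numpy as np

        def wiener_restore(blurred, psf, nsr=1e-2):
            """Frequency-domain Wiener filter with a scalar noise-to-signal ratio."""
            H = np.fft.fft2(psf, s=blurred.shape)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener transfer function
            return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

        # "Low-resolution optics": a 5 x 5 box blur plus additive noise.
        rng = np.random.default_rng(2)
        truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
        psf = np.zeros((64, 64));   psf[:5, :5] = 1.0 / 25.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
        blurred += rng.normal(0.0, 0.01, blurred.shape)

        mse = lambda a, b: np.mean((a - b) ** 2)         # evaluation criterion
        print(f"MSE of blurred image:  {mse(blurred, truth):.4f}")
        print(f"MSE of restored image: {mse(wiener_restore(blurred, psf), truth):.4f}")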

  14. Development of integrated control system for smart factory in the injection molding process

    NASA Astrophysics Data System (ADS)

    Chung, M. J.; Kim, C. Y.

    2018-03-01

    In this study, we propose an integrated control system for automation of the injection molding process, which is required for the construction of a smart factory. The injection molding process consists of heating, tool close, injection, cooling, tool open, and take-out. A take-out robot controller, an image processing module, and a process data acquisition interface module were developed and assembled into the integrated control system. By adopting the integrated control system, the injection molding process can be simplified and the cost of constructing a smart factory can be reduced.

  15. Real-time feedback control of twin-screw wet granulation based on image analysis.

    PubMed

    Madarász, Lajos; Nagy, Zsombor Kristóf; Hoffer, István; Szabó, Barnabás; Csontos, István; Pataki, Hajnalka; Démuth, Balázs; Szabó, Bence; Csorba, Kristóf; Marosi, György

    2018-06-04

    The present paper reports the first dynamic image analysis-based feedback control of continuous twin-screw wet granulation process. Granulation of the blend of lactose and starch was selected as a model process. The size and size distribution of the obtained particles were successfully monitored by a process camera coupled with an image analysis software developed by the authors. The validation of the developed system showed that the particle size analysis tool can determine the size of the granules with an error of less than 5 µm. The next step was to implement real-time feedback control of the process by controlling the liquid feeding rate of the pump through a PC, based on the real-time determined particle size results. After the establishment of the feedback control, the system could correct different real-life disturbances, creating a Process Analytically Controlled Technology (PACT), which guarantees the real-time monitoring and controlling of the quality of the granules. In the event of changes or bad tendencies in the particle size, the system can automatically compensate the effect of disturbances, ensuring proper product quality. This kind of quality assurance approach is especially important in the case of continuous pharmaceutical technologies. Copyright © 2018 Elsevier B.V. All rights reserved.
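
    The control idea (measure granule size from the camera image, compare with a setpoint, adjust the liquid feeding rate of the pump) can be illustrated with a toy loop; the process model, gains, and units below are assumptions for illustration only:

        import random

        # Toy closed loop: the median granule size reported by the image analysis
        # responds to the liquid feed rate of the pump; a PI correction drives it to
        # the setpoint. The process model, gains and units are illustrative only.
        setpoint_um = 500.0                    # target median granule size [um]
        feed_rate = 2.0                        # liquid feed rate [g/min]
        kp, ki, integral = 0.002, 0.0005, 0.0

        def measured_size(rate):
            """Stand-in for the camera/image-analysis result: size grows with feed."""
            return 150.0 * rate + random.gauss(0.0, 5.0)

        random.seed(0)
        for _ in range(100):
            size = measured_size(feed_rate)            # from the process camera
            error = setpoint_um - size
            integral += error
            feed_rate += kp * error + ki * integral    # command sent to the pump
        print(f"feed rate: {feed_rate:.2f} g/min, last measured size: {size:.0f} um")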

  16. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  17. Digital Image Display Control System, DIDCS. [for astronomical analysis

    NASA Technical Reports Server (NTRS)

    Fischel, D.; Klinglesmith, D. A., III

    1979-01-01

    DIDCS is an interactive image display and manipulation system that is used for a variety of astronomical image reduction and analysis operations. The hardware system consists of a PDP 11/40 main frame with 32K of 16-bit core memory; 96K of 16-bit MOS memory; two 9-track 800 BPI tape drives; eight 2.5-million-byte RK05-type disk packs; three user terminals; and a COMTAL 8000-S display system which has sufficient memory to store and display three 512 x 512 x 8 bit images along with an overlay plane and function table for each image, a pseudo color table, and the capability for displaying true color. The software system is based around the language FORTH, which permits an open-ended dictionary of user-level words for image analysis and display. A description of the hardware and software systems will be presented along with examples of the types of astronomical research that are being performed. A short discussion of the commonality and exchange of this type of image analysis system will also be given.

  18. Co-robotic ultrasound imaging: a cooperative force control approach

    NASA Astrophysics Data System (ADS)

    Finocchi, Rodolfo; Aalamifar, Fereshteh; Fang, Ting Yun; Taylor, Russell H.; Boctor, Emad M.

    2017-03-01

    Ultrasound (US) imaging remains one of the most commonly used imaging modalities in medical practice. However, due to the physical effort required to perform US imaging tasks, 63-91% of ultrasonographers develop musculoskeletal disorders throughout their careers. The goal of this work is to provide ultrasonographers with a system that facilitates and reduces strain in US image acquisition. To this end, we propose a system for admittance force robot control that uses the six-degree-of-freedom UR5 industrial robot. A six-axis force sensor is used to measure the forces and torques applied by the sonographer on the probe. As the sonographer pushes against the US probe, the robot complies with these forces, following the user's desired path. A one-axis load cell is used to measure contact forces between the patient and the probe in real time. When imaging, the robot augments the axial forces applied by the user, lessening the physical effort required. User studies showed an overall decrease in hand tremor while imaging at high forces, improvements in image stability, and a decrease in difficulty and strenuousness.
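
    A minimal admittance law of the kind described can be written in a few lines; the gain, axis convention, and the axial-assist factor below are illustrative assumptions and not the authors' UR5 controller:

        import numpy as np

        def admittance_velocity(hand_force, gain=0.002, axial_boost=1.5):
            """Admittance law: the commanded Cartesian probe velocity follows the force
            the sonographer applies to the probe (from the six-axis sensor). The axial
            (z) component is amplified so the robot supplies part of the contact force.
            Gain, axes and units are illustrative assumptions, not the UR5 interface."""
            v = gain * np.asarray(hand_force, dtype=float)   # comply with the user's push
            v[2] *= axial_boost                              # boost the push toward the patient
            return v                                         # [m/s], sent to the robot

        # Example: the user pushes 4 N laterally and 6 N toward the patient (-z).
        print("commanded velocity [m/s]:", admittance_velocity([4.0, 0.0, -6.0]))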

  19. Development of patient-controlled respiratory gating system based on visual guidance for magnetic-resonance image-guided radiation therapy.

    PubMed

    Kim, Jung-In; Lee, Hanyoung; Wu, Hong-Gyun; Chie, Eui Kyu; Kang, Hyun-Cheol; Park, Jong Min

    2017-09-01

    The aim of this study is to develop a visual guidance patient-controlled (VG-PC) respiratory gating system for respiratory-gated magnetic-resonance image-guided radiation therapy (MR-IGRT) and to evaluate the performance of the developed system. The near-real-time cine planar MR image of a patient acquired during treatment was transmitted to a beam projector in the treatment room through an optical fiber cable. The beam projector projected the cine MR images inside the bore of the ViewRay system in order to be visible to a patient during treatment. With this visual information, patients voluntarily controlled their respiration to put the target volume into the gating boundary (gating window). The effect of the presence of the beam projector in the treatment room on the image quality of the MRI was investigated by evaluating the signal-to-noise ratio (SNR), uniformity, low-contrast detectability, high-contrast spatial resolution, and spatial integrity with the VG-PC gating system. To evaluate the performance of the developed system, we applied the VG-PC gating system to a total of seven patients; six patients received stereotactic ablative radiotherapy (SABR) and one patient received conventional fractionated radiation therapy. The projected cine MR images were visible even when the room light was on. No image data loss or additional time delay during delivery of image data were observed. Every indicator representing MRI quality, including SNR, uniformity, low-contrast detectability, high-contrast spatial resolution, and spatial integrity exhibited values higher than the tolerance levels of the manufacturer with the VG-PC gating system; therefore, the presence of the VG-PC gating system in the treatment room did not degrade the MR image quality. The average beam-off times due to respiratory gating with and without the VG-PC gating system were 830.3 ± 278.2 s and 1264.2 ± 302.1 s respectively (P = 0.005). Consequently, the total treatment times excluding the time for patient setup with and without the VG-PC gating system were 1453.3 ± 297.3 s and 1887.2 ± 469.6 s, respectively, on average (P = 0.005). The average number of beam-off events during whole treatment session was reduced from 457 ± 154 times to 195 ± 90 times by using the VG-PC gating system (P < 0.001). The developed system could improve treatment efficiency when performing respiratory-gated MR-IGRT. The VG-PC gating system could be applied to any kind of bore-type radiotherapy machine. © 2017 American Association of Physicists in Medicine.

  20. Expert system for controlling plant growth in a contained environment

    NASA Technical Reports Server (NTRS)

    May, George A. (Inventor); Lanoue, Mark Allen (Inventor); Bethel, Matthew (Inventor); Ryan, Robert E. (Inventor)

    2011-01-01

    In a system for optimizing crop growth, vegetation is cultivated in a contained environment, such as a greenhouse, an underground cavern or other enclosed space. Imaging equipment is positioned within or about the contained environment, to acquire spatially distributed crop growth information, and environmental sensors are provided to acquire data regarding multiple environmental conditions that can affect crop development. Illumination within the contained environment, and the addition of essential nutrients and chemicals are in turn controlled in response to data acquired by the imaging apparatus and environmental sensors, by an "expert system" which is trained to analyze and evaluate crop conditions. The expert system controls the spatial and temporal lighting pattern within the contained area, and the timing and allocation of nutrients and chemicals to achieve optimized crop development. A user can access the "expert system" remotely, to assess activity within the growth chamber, and can override the "expert system".

  1. Expert system for controlling plant growth in a contained environment

    NASA Technical Reports Server (NTRS)

    May, George A. (Inventor); Lanoue, Mark Allen (Inventor); Bethel, Matthew (Inventor); Ryan, Robert E. (Inventor)

    2009-01-01

    In a system for optimizing crop growth, vegetation is cultivated in a contained environment, such as a greenhouse, an underground cavern or other enclosed space. Imaging equipment is positioned within or about the contained environment, to acquire spatially distributed crop growth information, and environmental sensors are provided to acquire data regarding multiple environmental conditions that can affect crop development. Illumination within the contained environment, and the addition of essential nutrients and chemicals are in turn controlled in response to data acquired by the imaging apparatus and environmental sensors, by an "expert system" which is trained to analyze and evaluate crop conditions. The expert system controls the spatial and temporal lighting pattern within the contained area, and the timing and allocation of nutrients and chemicals to achieve optimized crop development. A user can access the "expert system" remotely, to assess activity within the growth chamber, and can override the "expert system".

  2. Design and application of an array extended blackbody

    NASA Astrophysics Data System (ADS)

    Zhang, Ya-zhou; Fan, Xiao-li; Lei, Hao; Zhou, Zhi-yuan

    2018-02-01

    An array extended blackbody is designed to quantitatively measure and evaluate the performance of infrared imaging systems. The theory, structure, control software and application of blackbody are introduced. The parameters of infrared imaging systems such as the maximum detectable range, detection sensitivity, spatial resolution and temperature resolution can be measured.

  3. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronization of physical random bit generated in a cascade-coupled semiconductor ring lasers (CCSRL) system is proposed, and the security analysis is performed. In both transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on control matrix extracted from physical random bit, and pixel diffusion based on random bit stream extracted from physical random bit. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bit with verified randomness is generated based on chaos in the CCSRL system, and is used to simultaneously generate the control matrix and random bit stream. Finally, the control matrix and random bit stream are used for the encryption algorithm in order to change the position and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication application.
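
    The confusion/diffusion stages described above can be sketched as follows, with a NumPy pseudorandom generator standing in for the physical random bits produced by the chaotic CCSRL system:

        import numpy as np

        def encrypt(image, seed=1234):
            """Confusion + diffusion sketch; a NumPy PRNG replaces the physical random
            bits that the chaotic CCSRL lasers provide in the paper."""
            rng = np.random.default_rng(seed)
            flat = image.flatten()
            perm = rng.permutation(flat.size)                 # "control matrix": new pixel positions
            keystream = rng.integers(0, 256, flat.size, dtype=np.uint8)
            cipher = np.bitwise_xor(flat[perm], keystream)    # confuse positions, diffuse values
            return cipher.reshape(image.shape), perm, keystream

        def decrypt(cipher, perm, keystream):
            flat = np.bitwise_xor(cipher.flatten(), keystream)
            plain = np.empty_like(flat)
            plain[perm] = flat                                # undo the permutation
            return plain.reshape(cipher.shape)

        img = np.arange(64, dtype=np.uint8).reshape(8, 8)     # toy 8 x 8 "image"
        cipher, perm, ks = encrypt(img)
        assert np.array_equal(decrypt(cipher, perm, ks), img)
        print("round trip OK; first cipher row:", cipher[0])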

  4. Color management systems: methods and technologies for increased image quality

    NASA Astrophysics Data System (ADS)

    Caretti, Maria

    1997-02-01

    All the steps in the imaging chain -- from handling the originals in prepress to outputting them on any device -- have to be well calibrated and adjusted to each other in order to reproduce color images in a desktop environment as accurately as possible with respect to the original. Today most of the steps in prepress production are digital, and therefore it is realistic to believe that the color reproduction can be well controlled. This is true thanks to the development in recent years of fast, cost-effective scanners, digital sources and, not least, digital proofing devices. It is likely that well-defined tools and methods to control this imaging flow will lead to large cost and time savings as well as increased overall image quality. Until now, there has been a lack of good, reliable, easy-to-use systems (e.g. hardware, software, documentation, training and support) of a kind that would be accessible to the large group of users of graphic arts production systems. This paper provides an overview of the existing solutions to manage colors in a digital pre-press environment. Their benefits and limitations are discussed, as well as how they affect the production workflow and organization. The difference between a color controlled environment and one that is not is explained.

  5. Dedicated hardware processor and corresponding system-on-chip design for real-time laser speckle imaging.

    PubMed

    Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming

    2011-11-01

    Laser speckle imaging (LSI) is a noninvasive and full-field optical imaging technique which produces two-dimensional blood flow maps of tissues from the raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for the real-time processing of laser speckle imaging. The algorithm is developed and optimized specifically for LSI processing in the field programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI in FPGA. The pipeline processing scheme and parallel computing architecture are introduced into the design of this LSI hardware processor. When the LSI hardware processor is implemented in the FPGA running at the maximum frequency of 130 MHz, up to 85 raw images with the resolution of 640×480 pixels can be processed per second. Meanwhile, we also present a system on chip (SOC) solution for LSI processing by integrating the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip. This SOC solution also can be used to produce an application specific integrated circuit for LSI processing.
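
    The arithmetic such a processor pipelines is the standard spatial speckle contrast, K = sigma/mean over a small window. A NumPy/SciPy sketch of that computation (not the FPGA implementation) is shown below; the window size and test frame are illustrative:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(raw, win=5):
            """Spatial speckle contrast K = sigma / mean in a win x win neighbourhood,
            the per-pixel arithmetic that the FPGA pipeline evaluates in hardware."""
            raw = raw.astype(np.float64)
            mean = uniform_filter(raw, win)
            var = np.clip(uniform_filter(raw * raw, win) - mean * mean, 0.0, None)
            return np.sqrt(var) / (mean + 1e-12)

        # Synthetic fully developed speckle (exponential intensity statistics) should
        # give K near 1; flow-blurred regions would give lower values.
        rng = np.random.default_rng(3)
        frame = rng.exponential(100.0, size=(480, 640))
        print("mean contrast of the test frame:", round(float(speckle_contrast(frame).mean()), 2))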

  6. A Rotatable Quality Control Phantom for Evaluating the Performance of Flat Panel Detectors in Imaging Moving Objects.

    PubMed

    Haga, Yoshihiro; Chida, Koichi; Inaba, Yohei; Kaga, Yuji; Meguro, Taiichiro; Zuguchi, Masayuki

    2016-02-01

    As the use of diagnostic X-ray equipment with flat panel detectors (FPDs) has increased, so has the importance of proper management of FPD systems. To ensure quality control (QC) of FPD systems, an easy method for evaluating FPD imaging performance for both stationary and moving objects is required. Until now, simple rotatable QC phantoms have not been available for easy evaluation of the performance (spatial resolution and dynamic range) of FPDs in imaging moving objects. We developed a QC phantom for this purpose. It consists of three thicknesses of copper and a rotatable test pattern of piano wires of various diameters. Initial tests confirmed its stable performance. Our moving phantom is very useful for QC of FPD images of moving objects because it easily enables visual evaluation of imaging performance (spatial resolution and dynamic range).

  7. Image Formation in High Contrast Optical Systems: The Role of Polarization

    NASA Technical Reports Server (NTRS)

    Breckinridge, James B.

    2004-01-01

    To find evidence of life in the Universe outside our solar system is one of the most compelling and visionary adventures of the 21st century. The technologies to create the telescopes and instruments that will enable this discovery are now within the grasp of mankind. Direct imaging of a very faint planet around a neighboring bright star requires a high contrast or hypercontrast optical imaging system capable of controlling unwanted radiation within the system to one part in ten to the 11th. This paper identifies several physical phenomena that affect image quality in high contrast imaging systems. Polarization induced at curved metallic surfaces and by anisotropy in the deposition process (Smith-Purcell effect), along with beam shifts introduced by the Goos-Hänchen effect, are discussed. A typical configuration is analyzed, and technical risk mitigation concepts are discussed.

  8. Development of measurement system for gauge block interferometer

    NASA Astrophysics Data System (ADS)

    Chomkokard, S.; Jinuntuya, N.; Wongkokua, W.

    2017-09-01

    We developed a measurement system for collecting and analyzing the fringe pattern images from a gauge block interferometer. The system is based on a Raspberry Pi, an open-source platform, with Python programming and the OpenCV image manipulation library. The images were recorded by the Raspberry Pi camera with a five-megapixel capacity. Image noise was suppressed to obtain the best results in the analyses. The low-noise images were processed to find the edges of the fringe patterns using the contour technique for the phase shift analyses. We tested our system with the phase shift patterns between a gauge block and a reference plate. The phase shift patterns were measured by a Twyman-Green type interferometer using a He-Ne laser with the temperature controlled at 20.0 °C. The results of the measurement will be presented and discussed.
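
    A minimal version of the processing chain described (denoise, binarize, extract fringe contours with OpenCV) might look as follows; the synthetic fringe pattern stands in for a Raspberry Pi camera frame and the parameters are illustrative:

        import cv2
        import numpy as np

        # Synthetic fringe pattern standing in for a Raspberry Pi camera frame.
        x = np.linspace(0.0, 8.0 * np.pi, 640)
        fringes = (127.0 * (1.0 + np.cos(x))).astype(np.uint8)
        frame = np.tile(fringes, (480, 1))
        noise = np.random.default_rng(4).integers(0, 20, frame.shape, dtype=np.uint8)
        frame = cv2.add(frame, noise)                       # add camera-like noise

        # Noise suppression, binarisation and contour extraction, as in the text.
        blur = cv2.GaussianBlur(frame, (5, 5), 0)
        _, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]   # OpenCV 3/4 compatible
        print(f"detected {len(contours)} bright fringe regions")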

  9. Real-time feedback for spatiotemporal field stabilization in MR systems.

    PubMed

    Duerst, Yolanda; Wilm, Bertram J; Dietrich, Benjamin E; Vannesjo, S Johanna; Barmet, Christoph; Schmid, Thomas; Brunner, David O; Pruessmann, Klaas P

    2015-02-01

    MR imaging and spectroscopy require a highly stable, uniform background field. The field stability is typically limited by hardware imperfections, external perturbations, or field fluctuations of physiological origin. The purpose of the present work is to address these issues by introducing spatiotemporal field stabilization based on real-time sensing and feedback control. An array of NMR field probes is used to sense the field evolution in a whole-body MR system concurrently with regular system operation. The field observations serve as inputs to a proportional-integral controller that governs correction currents in gradient and higher-order shim coils such as to keep the field stable in a volume of interest. The feedback system was successfully set up, currently reaching a minimum latency of 20 ms. Its utility is first demonstrated by countering thermal field drift during an EPI protocol. It is then used to address respiratory field fluctuations in a T2 *-weighted brain exam, resulting in substantially improved image quality. Feedback field control is an effective means of eliminating dynamic field distortions in MR systems. Third-order spatial control at an update time of 100 ms has proven sufficient to largely eliminate thermal and breathing effects in brain imaging at 7 Tesla. © 2014 Wiley Periodicals, Inc.
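
    The feedback idea (field probes observe the field offset, a proportional-integral controller drives shim currents to cancel drift and breathing fluctuations) can be illustrated with a single-channel toy loop; the drift rate, breathing model, gains, and field-per-ampere scale below are assumptions, not the published hardware values:

        import numpy as np

        dt, kp, ki = 0.1, 0.6, 2.0      # 100 ms update; PI gains are illustrative
        field_per_amp = 5.0             # field change per unit shim current (assumed)
        drift_rate = 0.8                # slow thermal drift of the background field [a.u./s]
        breathing = lambda t: 0.5 * np.sin(2.0 * np.pi * 0.3 * t)   # respiratory term

        current, integral, residuals = 0.0, 0.0, []
        for k in range(600):                               # one minute of operation
            t = k * dt
            field = drift_rate * t + breathing(t) + field_per_amp * current
            residuals.append(field)                        # what the NMR probes observe
            integral += field * dt
            current = -(kp * field + ki * integral) / field_per_amp   # PI shim update
        print(f"rms residual field with control on: {np.std(residuals[100:]):.3f} a.u.")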

  10. Handheld nonlinear microscope system comprising a 2 MHz repetition rate, mode-locked Yb-fiber laser for in vivo biomedical imaging

    PubMed Central

    Krolopp, Ádám; Csákányi, Attila; Haluszka, Dóra; Csáti, Dániel; Vass, Lajos; Kolonics, Attila; Wikonkál, Norbert; Szipőcs, Róbert

    2016-01-01

    A novel, Yb-fiber laser based, handheld 2PEF/SHG microscope imaging system is introduced. It is suitable for in vivo imaging of murine skin at an average power level as low as 5 mW at 200 kHz sampling rate. Amplified and compressed laser pulses having a spectral bandwidth of 8 to 12 nm at around 1030 nm excite the biological samples at a ~1.89 MHz repetition rate, which explains how the high quality two-photon excitation fluorescence (2PEF) and second harmonic generation (SHG) images are obtained at the average power level of a laser pointer. The scanning, imaging and detection head, which comprises a conventional microscope objective for beam focusing, has a physical length of ~180 mm owing to the custom designed imaging telescope system between the laser scanner mirrors and the entrance aperture of the microscope objective. Operation of the all-fiber, all-normal dispersion Yb-fiber ring laser oscillator is electronically controlled by a two-channel polarization controller for Q-switching free mode-locked operation. The whole nonlinear microscope imaging system has the main advantages of the low price of the fs laser applied, fiber optics flexibility, a relatively small, light-weight scanning and detection head, and a very low risk of thermal or photochemical damage of the skin samples. PMID:27699118

  11. Handheld nonlinear microscope system comprising a 2 MHz repetition rate, mode-locked Yb-fiber laser for in vivo biomedical imaging.

    PubMed

    Krolopp, Ádám; Csákányi, Attila; Haluszka, Dóra; Csáti, Dániel; Vass, Lajos; Kolonics, Attila; Wikonkál, Norbert; Szipőcs, Róbert

    2016-09-01

    A novel, Yb-fiber laser based, handheld 2PEF/SHG microscope imaging system is introduced. It is suitable for in vivo imaging of murine skin at an average power level as low as 5 mW at 200 kHz sampling rate. Amplified and compressed laser pulses having a spectral bandwidth of 8 to 12 nm at around 1030 nm excite the biological samples at a ~1.89 MHz repetition rate, which explains how the high quality two-photon excitation fluorescence (2PEF) and second harmonic generation (SHG) images are obtained at the average power level of a laser pointer. The scanning, imaging and detection head, which comprises a conventional microscope objective for beam focusing, has a physical length of ~180 mm owing to the custom designed imaging telescope system between the laser scanner mirrors and the entrance aperture of the microscope objective. Operation of the all-fiber, all-normal dispersion Yb-fiber ring laser oscillator is electronically controlled by a two-channel polarization controller for Q-switching free mode-locked operation. The whole nonlinear microscope imaging system has the main advantages of the low price of the fs laser applied, fiber optics flexibility, a relatively small, light-weight scanning and detection head, and a very low risk of thermal or photochemical damage of the skin samples.

  12. Image dissector control and data system, part 1. [instrument packages and equipment specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A general description of the image dissector control and data system is presented along with detailed design information, operating instructions, and maintenance and trouble-shooting procedures for the four instrumentation packages. The four instrumentation packages include a 90 inch telescope, a simplified telescope module for use on the 90 inch or other telescopes, a photographic plate scanner module which permits the scanning of astronomical photographic plates in the laboratory, and the lunar experiment package module.

  13. Design of a video capsule endoscopy system with low-power ASIC for monitoring gastrointestinal tract.

    PubMed

    Liu, Gang; Yan, Guozheng; Zhu, Bingquan; Lu, Li

    2016-11-01

    In recent years, wireless capsule endoscopy (WCE) has been a state-of-the-art tool to examine disorders of the human gastrointestinal tract painlessly. However, system miniaturization, enhancement of the image-data transfer rate and power consumption reduction for the capsule are still key challenges. In this paper, a video capsule endoscopy system with a low-power controlling and processing application-specific integrated circuit (ASIC) is designed and fabricated. In the design, these challenges are resolved by employing a microimage sensor, a novel radio frequency transmitter with an on-off keying modulation rate of 20 Mbps, and an ASIC structure that includes a clock management module, a power-efficient image compression module and a power management unit. An ASIC-based prototype capsule, which measures Φ11 mm × 25 mm, has been developed here. Test results show that the designed ASIC consumes much less power than most of the other WCE systems and that its total power consumption per frame is the least. The image compression module can realize high near-lossless compression rate (3.69) and high image quality (46.2 dB). The proposed system supports multi-spectral imaging, including white light imaging and autofluorescence imaging, at a maximum frame rate of 24 fps and with a resolution of 400 × 400. Tests and in vivo trials in pigs have proved the feasibility of the entire system, but further improvements in capsule control and compression performance inside the ASIC are needed in the future.

  14. Method for enhanced control of welding processes

    DOEpatents

    Sheaffer, Donald A.; Renzi, Ronald F.; Tung, David M.; Schroder, Kevin

    2000-01-01

    Method and system for producing high quality welds in welding processes in general, and gas tungsten arc (GTA) welding in particular, by controlling weld penetration. Light emitted from a weld pool is collected from the backside of a workpiece by optical means during welding and transmitted to a digital video camera for further processing, after the emitted light is first passed through a short wavelength pass filter to remove infrared radiation. By filtering out the infrared component of the light emitted from the backside weld pool image, the present invention provides for the accurate determination of the weld pool boundary. Data from the digital camera is fed to an imaging board which focuses on a 100 x 100 pixel portion of the image. The board performs a thresholding operation and provides this information to a digital signal processor to compute the backside weld pool dimensions and area. This information is used by a control system, in a dynamic feedback mode, to automatically adjust appropriate parameters of a welding system, such as the welding current, to control weld penetration and thus create a uniform weld bead and high quality weld.
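
    A NumPy stand-in for the imaging-board steps (threshold the 100 x 100 backside image, measure the pool area, feed the area error back to the welding current) is sketched below; the synthetic pool model and the gain are illustrative, not the patented controller:

        import numpy as np

        def pool_area(image, threshold=0.5):
            """Thresholding step of the imaging board: count bright pixels in the
            100 x 100 backside image (infrared already removed by the filter)."""
            return int(np.count_nonzero(image > threshold))

        def synthetic_pool(current):
            """Stand-in for the camera: the bright pool radius grows with weld current."""
            y, x = np.mgrid[:100, :100]
            radius = 0.08 * current                        # arbitrary process model
            return ((x - 50) ** 2 + (y - 50) ** 2 < radius ** 2).astype(float)

        target_area, current, gain = 700, 150.0, 0.02      # pixels, amperes, A per pixel of error
        for _ in range(30):
            area = pool_area(synthetic_pool(current))
            current += gain * (target_area - area)         # dynamic feedback on the weld current
        print(f"weld current: {current:.1f} A, pool area: {area} px")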

  15. s98e09732

    NASA Image and Video Library

    1998-11-01

    S98-E-09732 (Nov. 1998) --- Closeup view of part of the antenna system for the Teleoperator Control System (TORU) manual docking system on Zarya. This photograph was taken prior to Zarya's deployment. Recent activities showed an indication of a possible failure of two small antenna elements in the TORU to deploy. The accompanying image shows a pre-flight closeout closeup of the second small element.

  16. Mechanisms test bed math model modification and simulation support

    NASA Technical Reports Server (NTRS)

    Gilchrist, Andrea C.; Tobbe, Patrick A.

    1995-01-01

    This report summarizes the work performed under contract NAS8-38771 in support of the Marshall Space Flight Center Six Degree of Freedom Motion Facility and Flight Robotics Laboratory. The contract activities included the development of the two flexible body and Remote Manipulator System simulations, Dynamic Overhead Target Simulator control system and operating software, Global Positioning System simulation, and Manipulator Coupled Spacecraft Controls Testbed. Technical support was also provided for the Lightning Imaging Sensor and Solar X-Ray Imaging programs. The cover sheets and introductory sections for the documentation written under this contract are provided as an appendix.

  17. Flexible real-time magnetic resonance imaging framework.

    PubMed

    Santos, Juan M; Wright, Graham A; Pauly, John M

    2004-01-01

    The extension of MR imaging to new applications has demonstrated the limitations of the architecture of current real-time systems. Traditional real-time implementations provide continuous acquisition of data and modification of basic sequence parameters on the fly. We have extended the concept of real-time MRI by designing a system that drives the examinations from a real-time localizer and then gets reconfigured for different imaging modes. Upon operator request or automatic feedback, the system can immediately generate a new pulse sequence or change fundamental aspects of the acquisition such as gradient waveforms, excitation pulses, and scan planes. This framework has been implemented by connecting a data processing and control workstation to a conventional clinical scanner. Key components in the design of this framework are the data communication and control mechanisms, reconstruction algorithms optimized for real-time operation and adaptability, a flexible user interface, and extensible user interaction. In this paper we describe the various components that comprise this system. Some of the applications implemented in this framework include real-time catheter tracking embedded in high frame rate real-time imaging and immediate switching between a real-time localizer and high-resolution volume imaging for coronary angiography applications.

  18. An adaptive toolkit for image quality evaluation in system performance test of digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zhang, Guozhi; Petrov, Dimitar; Marshall, Nicholas; Bosmans, Hilde

    2017-03-01

    Digital breast tomosynthesis (DBT) is a relatively new diagnostic imaging modality for women. Currently, various models of DBT systems are available on the market and the number of installations is rapidly increasing. EUREF, the European Reference Organization for Quality Assured Breast Screening and Diagnostic Services, has proposed a preliminary Guideline - protocol for the quality control of the physical and technical aspects of digital breast tomosynthesis systems, with an ultimate aim of providing limiting values guaranteeing proper performance for different applications of DBT. In this work, we introduce an adaptive toolkit developed in accordance with this guideline to facilitate the process of image quality evaluation in DBT performance test. This toolkit implements robust algorithms to quantify various technical parameters of DBT images and provides a convenient user interface in practice. Each test is built into a separate module with configurations set corresponding to the European guideline, which can be easily adapted to different settings and extended with additional tests. This toolkit largely improves the efficiency for image quality evaluation of DBT. It is also going to evolve with the development of protocols in quality control of DBT systems.

  19. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  20. Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System

    NASA Astrophysics Data System (ADS)

    Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.

    2018-02-01

    We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.

  1. A Work Station For Control Of Changing Systems

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel J.

    1988-01-01

    Touch screen and microcomputer enable flexible control of complicated systems. Computer work station equipped to produce graphical displays used as command panel and status indicator for command-and-control system. Operator uses images of control buttons displayed on touch screen to send prestored commands. Use of prestored library of commands reduces incidence of errors. If necessary, operator uses conventional keyboard to enter commands in real time to handle unforeseeable situations.

  2. Development of a simultaneous optical/PET imaging system for awake mice

    NASA Astrophysics Data System (ADS)

    Takuwa, Hiroyuki; Ikoma, Yoko; Yoshida, Eiji; Tashima, Hideaki; Wakizaka, Hidekatsu; Shinaji, Tetsuya; Yamaya, Taiga

    2016-09-01

    Simultaneous measurements of multiple physiological parameters are essential for the study of brain disease mechanisms and the development of suitable therapies to treat them. In this study, we developed a measurement system for simultaneous optical imaging and PET for awake mice. The key elements of this system are the OpenPET, optical imaging and fixation apparatus for an awake mouse. The OpenPET is our original open-type PET geometry, which can be used in combination with another device because of the easily accessible open space of the former. A small prototype of the axial shift single-ring OpenPET (AS-SROP) was used. The objective lens for optical imaging with a mounted charge-coupled device camera was placed inside the open space of the AS-SROP. Our original fixation apparatus to hold an awake mouse was also applied. As a first application of this system, simultaneous measurements of cerebral blood flow (CBF) by laser speckle imaging (LSI) and [11C]raclopride-PET were performed under control and 5% CO2 inhalation (hypercapnia) conditions. Our system successfully obtained the CBF and [11C]raclopride radioactivity concentration simultaneously. Accumulation of [11C]raclopride was observed in the striatum where the density of dopamine D2 receptors is high. LSI measurements could be stably performed for more than 60 minutes. Increased CBF induced by hypercapnia was observed while CBF under the control condition was stable. We concluded that our imaging system should be useful for investigating the mechanisms of brain diseases in awake animal models.

  3. Realization of the ergonomics design and automatic control of the fundus cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye

    2012-12-01

    Ergonomic design in fundus cameras should extend patient comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular assembly with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects fundus images automatically whether or not the patient's eyes are ametropic. Finally, a moving visual target is developed for expanding the field of the fundus images.

  4. CMOS active pixel sensor type imaging system on a chip

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  5. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    Eos (Extensible object-oriented system) is a powerful application suite for image processing of electron micrographs. Ordinarily, Eos is driven only through character user interfaces (CUI) under operating systems (OS) such as OS X or Linux, which is not user-friendly. Users of Eos therefore need to be experts in image processing of electron micrographs and also need some knowledge of computer science, yet not everyone who needs Eos is comfortable with a CUI. We therefore extended Eos into an OS-independent web system with a graphical user interface (GUI) by integrating a web browser. The advantage of the web browser is not only that it provides Eos with a GUI, but also that it lets Eos work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the browser. Eos has more than 400 commands related to image processing for electron microscopy, each with its own usage. Since the beginning of development, Eos has managed its user interface through an interface definition file, "OptionControlFile", written in CSV (Comma-Separated Value) format; each command has an "OptionControlFile" that records the information needed to generate its interface and describe its usage. Because this mechanism is mature and convenient, the developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also reads the "OptionControlFile" and produces a web user interface automatically. The basic actions of the client-side system were implemented properly and support auto-generation of web forms with functions for execution, image preview, and file upload to a web server. The system can thus execute Eos commands with the options specific to each command and carry out image analysis. Two problems remained: the image file format for visualization and the workspace for analysis. File format information is needed to check whether input/output files are correct, and a common workspace for analysis is needed because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two types of system. The first uses only the local environment: the user runs a web server provided by Eos, accesses it through a web browser, and manipulates local files with the GUI in the browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), a platform we are developing that works in heterogeneous distributed environments. Users can put their resources, such as microscopic images and text files, into the server-side environment supported by PIONE, and experts can write PIONE rule definitions that define an image-processing workflow. PIONE then runs each image-processing step on a suitable computer, following the defined rules. PIONE supports interactive manipulation, so the user can try a command with various setting values; here our contribution is the auto-generation of the GUI for a PIONE workflow. As an advanced function, we have developed a module that logs user actions. The logs include information such as the setting values used in image processing and the sequence of commands, and they can be exploited in several ways. For example, when an expert discovers some know-how in image processing, other users can share the logs containing that know-how, and by analyzing the logs we may derive recommended workflows for image analysis. We have also developed the system infrastructure needed to build a social platform of image processing for electron microscopists. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved.
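
    As an illustration of how a CSV interface-definition file can drive automatic form generation, the following minimal Python sketch parses a hypothetical option file and emits an HTML form. The column layout (flag, type, default, description), the "infile" type, and the form endpoint are assumptions made for illustration; they are not the actual Eos OptionControlFile specification or the Zephyr implementation.

        import csv
        import html

        def form_from_option_file(path, command):
            # Each row is assumed to hold: option flag, value type, default, description.
            rows = []
            with open(path, newline="") as f:
                for flag, opt_type, default, description in csv.reader(f):
                    field = (f'<input type="file" name="{html.escape(flag)}">'
                             if opt_type == "infile"
                             else f'<input type="text" name="{html.escape(flag)}" '
                                  f'value="{html.escape(default)}">')
                    rows.append(f"<label>{html.escape(flag)}: "
                                f"{html.escape(description)}</label> {field}")
            fields = "\n".join(rows)
            return (f'<form action="/run/{html.escape(command)}" method="post" '
                    f'enctype="multipart/form-data">\n{fields}\n'
                    f'<button type="submit">Execute</button>\n</form>')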

  6. Passive lighting responsive three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Hu, Juanmei

    2017-11-01

    A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting-responsive ability and vivid 3D performance has been proposed and demonstrated. Several novel lighting-responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffuse illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting-responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin-film lighting-responsive II system with a 0.4 mm thickness was fabricated. This technique gives additional degrees of freedom for designing the II system and enables the virtual 3D image to interact with the real illumination environment in real time.

  7. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  8. Computer Vision for Artificially Intelligent Robotic Systems

    NASA Astrophysics Data System (ADS)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper, an Acoustic Imaging Recognition System (AIRS) is introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up-table method, which saves considerable calculation time and is practicable. The AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit, and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles, and intensity of the target, the characteristics of the target can be determined, and all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit, a correlation method is used to overcome the limitation of short ultrasonic bursts, because the correlation system can transmit large time-bandwidth signals and obtain improved resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and converted into digital data by μ-law coding, and this data, together with the delay time and the horizontal and vertical angle information, is sent to the main control unit for further analysis. For the recognition process, a dynamic look-up-table method is used: several recognition pattern tables are first set up, and the new pattern scanned by the transducer array is then divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlation receiver and the horizontal and vertical rotation angles of the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, a narrow-beam transducer with an input voltage of 50 V peak-to-peak is used. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image lab in the robot's memory. Index terms: acoustic system, ultrasonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree.

  9. Modified modular imaging system designed for a sounding rocket experiment

    NASA Astrophysics Data System (ADS)

    Veach, Todd J.; Scowen, Paul A.; Beasley, Matthew; Nikzad, Shouleh

    2012-09-01

    We present the design and system calibration results from the fabrication of a charge-coupled device (CCD) based imaging system designed using a modified modular imager cell (MIC) and used in an ultraviolet sounding rocket mission. The heart of the imaging system is the MIC, which provides the video pre-amplifier circuitry and CCD clock level filtering. The MIC is designed with a standard four-layer FR4 printed circuit board (PCB) with surface mount and through-hole components for ease of testing and lower fabrication cost. The imager is a 3.5k by 3.5k LBNL p-channel CCD with enhanced quantum efficiency response in the UV, achieved using delta-doping technology at JPL. The recently released PCIe/104 Small-Cam CCD controller from Astronomical Research Cameras, Inc. (ARC) performs readout of the detector. The PCIe/104 Small-Cam system has the same capabilities as its larger PCI counterparts, but in a smaller form factor, which makes it ideally suited for sub-orbital ballistic missions. The overall control is accomplished using a PCIe/104 computer from RTD Embedded Technologies, Inc. The design, fabrication, and testing were done at the Laboratory for Astronomical and Space Instrumentation (LASI) at Arizona State University. Integration and flight calibration are to be completed at the University of Colorado Boulder before integration into CHESS.

  10. Hardware Timestamping for an Image Acquisition System Based on FlexRIO and IEEE 1588 v2 Standard

    NASA Astrophysics Data System (ADS)

    Esquembri, S.; Sanz, D.; Barrera, E.; Ruiz, M.; Bustos, A.; Vega, J.; Castro, R.

    2016-02-01

    Current fusion devices usually implement distributed acquisition systems for the multiple diagnostics of their experiments. However, each diagnostic is composed of hundreds or even thousands of signals, including images from the vessel interior. These signals and images must be correctly timestamped, because all the information will be analyzed to identify plasma behavior using temporal correlations. For acquisition devices without synchronization mechanisms, the timestamp is supplied by another device with timing capabilities when signaled by the first device, and each data item must later be associated with its timestamp, usually in software. This critical step becomes unfeasible for software applications when sampling rates are high. To solve this problem, this paper presents the implementation of an image acquisition system with a real-time hardware timestamping mechanism, synchronized with a master clock using the IEEE 1588 v2 Precision Time Protocol (PTP). Synchronization, image acquisition and processing, and timestamping are implemented using a Field Programmable Gate Array (FPGA) and a PTP v2-synchronized timing card. The system has been validated using a camera simulator streaming videos from fusion databases. The developed architecture is fully compatible with ITER Fast Controllers and has been integrated with EPICS to control and monitor the whole system.

  11. Advanced visualization platform for surgical operating room coordination: distributed video board system.

    PubMed

    Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas

    2006-06-01

    One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.

  12. Estimate of the effect of micro-vibration on the performance of the Algerian satellite (Alsat-1B) imager

    NASA Astrophysics Data System (ADS)

    Serief, Chahira

    2017-11-01

    Alsat-1B, launched into a 670 km sun-synchronous orbit on board the PSLV launch vehicle from the Sriharikota launch site in India on 26 September 2016, is a medium resolution Earth Observation satellite with a mass of 100 kg. Alsat-1B will be used for agricultural and resource monitoring, disaster management, land use mapping and urban planning. It is based on the SSTL-100 platform, and flies a 24 m multispectral imager and a 12 m panchromatic imager delivering images with a swath width of 140 km. One of the main factors affecting the performance of satellite-borne optical imaging systems is micro-vibration. Micro-vibration is a low-level mechanical disturbance inevitably generated by moving parts on a satellite and exceptionally difficult for the attitude and orbital control system (AOCS) of a spacecraft to control. Micro-vibration usually causes problems for optical imaging systems onboard Earth Observation satellites. Its major effect is the excitation of the support structures of the optical elements during imaging operations, which can result in severe degradation of image quality through smearing and distortion. Quantitative characterization of image degradation caused by micro-vibration is thus useful and important as part of system-level analysis, which can help prevent micro-vibration influence through proper design and guide restoration of the degraded image. The aim of this work is to provide quantitative estimates of the effect of micro-vibration on the performance of the Alsat-1B imager, as may be experienced operationally, in terms of the modulation transfer function (MTF) and based on ground micro-vibration test results.

  13. Radiographic Local Tumor Control and Pain Palliation of Sarcoma Metastases within the Musculoskeletal System with Percutaneous Thermal Ablation.

    PubMed

    Vaswani, Devin; Wallace, Adam N; Eiswirth, Preston S; Madaelil, Thomas P; Chang, Randy O; Tomasian, Anderanik; Jennings, Jack W

    2018-03-14

    To evaluate the effectiveness of percutaneous image-guided thermal ablation in achieving local tumor control and pain palliation of sarcoma metastases within the musculoskeletal system. Retrospective review of 64 sarcoma metastases within the musculoskeletal system in 26 women and 15 men (total = 41) treated with ablation between December 2011 and August 2016 was performed. Mean age of the cohort was 42.9 years ± 16.0 years. Two subgroups were treated: oligometastatic disease (n = 13) and widely metastatic disease (n = 51). A variety of sarcoma histologies were treated with average tumor volume of 42.5 cm³ (range 0.1-484.7 cm³). Pain scores were recorded before and 4 weeks after therapy for 59% (38/64) of treated lesions. Follow-up imaging was evaluated for local control and to monitor sites of untreated disease as an internal control. Fifty-eight percent (37/64) were lost to imaging follow-up at varying time points over a year. Complication rate was 5% (3/64; one minor and two major events). One-year local tumor control rates were 70% (19/27) in all patients, 67% (12/18) in the setting of progression of untreated metastases, and 100% (10/10) in the setting of oligometastatic disease. Median pain scores decreased from 8 (interquartile range 5.0-9.0) to 3 (interquartile range 0.1-4.0) 1 month after the procedure (P < 0.001). Image-guided percutaneous ablation is an effective option for local tumor control and pain palliation of metastatic sarcomas within the musculoskeletal system. Treatment in the setting of oligometastatic disease offers potential for remission. Level 4, Retrospective Review.

  14. Analysis of Interactive Graphics Display Equipment for an Automated Photo Interpretation System.

    DTIC Science & Technology

    1982-06-01

    System provides the hardware and software for a range of graphics processor tasks. The IMAGE System employs the RSX-11M real-time operating system in... One hard copy unit serves up to four work stations. The executive program of the IMAGE system is the DEC RSX-11M real-time operating system. In... picture controller. The PDP 11/34 executes programs concurrently under the RSX-11M real-time operating system. Each graphics program consists of a...

  15. An integrated optical coherence microscopy imaging and optical stimulation system for optogenetic pacing in Drosophila melanogaster (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Alex, Aneesh; Li, Airong; Men, Jing; Jerwick, Jason; Tanzi, Rudolph E.; Zhou, Chao

    2016-03-01

    Electrical stimulation is the clinical standard for cardiac pacing. Although highly effective in controlling cardiac rhythm, the invasive nature, non-specificity to cardiac tissues and possible tissue damage limits its applications. Optogenetic pacing of the heart is a promising alternative, which is non-invasive and more specific, has high spatial and temporal precision, and avoids the shortcomings in electrical stimulation. Drosophila melanogaster, which is a powerful model organism with orthologs of nearly 75% of human disease genes, has not been studied for optogenetic pacing in the heart. Here, we developed a non-invasive integrated optical pacing and optical coherence microscopy (OCM) imaging system to control the heart rhythm of Drosophila at different developmental stages using light. The OCM system is capable of providing high imaging speed (130 frames/s) and ultrahigh imaging resolutions (1.5 μm and 3.9 μm for axial and transverse resolutions, respectively). A light-sensitive pacemaker was developed in Drosophila by specifically expressing the light-gated cation channel, channelrhodopsin-2 (ChR2) in transgenic Drosophila heart. We achieved non-invasive and specific optical control of the Drosophila heart rhythm throughout the fly's life cycle (larva, pupa, and adult) by stimulating the heart with 475 nm pulsed laser light. Heart response to stimulation pulses was monitored non-invasively with OCM. This integrated non-invasive optogenetic control and in vivo imaging technique provides a novel platform for performing research studies in developmental cardiology.

  16. Three-dimensional imaging and remote sensing imaging; Proceedings of the Meeting, Los Angeles, CA, Jan. 14, 15, 1988

    NASA Astrophysics Data System (ADS)

    Robbins, Woodrow E.

    1988-01-01

    The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.

  17. High-Spatial- and High-Temporal-Resolution Dynamic Contrast-enhanced MR Breast Imaging with Sweep Imaging with Fourier Transformation: A Pilot Study

    PubMed Central

    Benson, John C.; Idiyatullin, Djaudat; Snyder, Angela L.; Snyder, Carl J.; Hutter, Diane; Everson, Lenore I.; Eberly, Lynn E.; Nelson, Michael T.; Garwood, Michael

    2015-01-01

    Purpose To report the results of sweep imaging with Fourier transformation (SWIFT) magnetic resonance (MR) imaging for diagnostic breast imaging. Materials and Methods Informed consent was obtained from all participants under one of two institutional review board–approved, HIPAA-compliant protocols. Twelve female patients (age range, 19–54 years; mean age, 41.2 years) and eight normal control subjects (age range, 22–56 years; mean age, 43.2 years) enrolled and completed the study from January 28, 2011, to March 5, 2013. Patients had lesions previously categorized as Breast Imaging Reporting and Data System 4 or 5 based on mammography and/or ultrasonographic imaging. Contrast-enhanced SWIFT imaging was completed by using a 4-T research MR imaging system. Noncontrast studies were completed in the normal control subjects. One of two sizes of single-breast SWIFT-compatible transceiver coil was used for nine patients and five controls. Three patients and five control subjects used a SWIFT-compatible dual breast coil. Temporal resolution was 5.9–7.5 seconds. Spatial resolution was 1.00 mm isotropic, with later examinations at 0.67 mm isotropic, and dual breast at 1.00 mm or 0.75 mm isotropic resolution. Results Two nonblinded breast radiologists reported SWIFT image findings of normal breast tissue, benign fibroadenomas (six of six lesions), and malignant lesions (10 of 12 lesions) concordant with other imaging modalities and pathologic reports. Two lesions in two patients were not visualized because of coil field of view. The images yielded by SWIFT showed the presence and extent of known breast lesions. Conclusion The SWIFT technique could become an important addition to breast imaging modalities because it provides high spatial resolution at all points during the dynamic contrast-enhanced examination. © RSNA, 2014 PMID:25247405

  18. Neurosurgical robotic arm drilling navigation system.

    PubMed

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify the lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was about to be drilled through. Three kinds of experiments were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Wireless Command-and-Control of UAV-Based Imaging LANs

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley; Dunagan, S. E.; Sullivan, D. V.; Slye, R. E.; Leung, J. G.; Johnson, L. F.

    2006-01-01

    Dual airborne imaging system networks were operated using a wireless line-of-sight telemetry system developed as part of a 2002 unmanned aerial vehicle (UAV) imaging mission over the USA's largest coffee plantation on the Hawaiian island of Kauai. A primary mission objective was the evaluation of commercial-off-the-shelf (COTS) 802.11b wireless technology for reduction of payload telemetry costs associated with UAV remote sensing missions. Predeployment tests with a conventional aircraft demonstrated successful wireless broadband connectivity between a rapidly moving airborne imaging local area network (LAN) and a fixed ground station LAN. Subsequently, two separate LANs with imaging payloads, packaged in exterior-mounted pressure pods attached to the underwing of NASA's Pathfinder-Plus UAV, were operated wirelessly by ground-based LANs over independent Ethernet bridges. Digital images were downlinked from the solar-powered aircraft at data rates of 2-6 megabits per second (Mbps) over a range of 6.5-9.5 km. An integrated wide area network enabled payload monitoring and control through the Internet from a range of ca. 4000 km during parts of the mission. The recent advent of 802.11g technology is expected to boost the system data rate by about a factor of five.

  20. Automated Adaptive Brightness in Wireless Capsule Endoscopy Using Image Segmentation and Sigmoid Function.

    PubMed

    Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A

    2016-08-01

    Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with the frame rate, the brightness of the image is an important parameter that influences image quality, and it drives the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light emitting diodes (LEDs) are normally used as sources, with modulated pulses used to control the LED brightness. In practice, under- and over-illumination are very common in WCE: the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. Each captured image is segmented into four equal regions and the brightness level of each region is calculated. An adaptive sigmoid function is then used to find the optimized brightness level, and accordingly a new duty cycle for the modulated pulse is generated for capturing subsequent images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules such as Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental conditions, and as a result, good-quality images are captured at an average brightness level of 40%, which reduces the power consumption of the capsule.
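
    To make the described control loop concrete, the following Python sketch computes the quadrant-averaged brightness of a captured frame and maps it through a sigmoid to the LED duty cycle for the next frame. The target brightness, sigmoid gain, and duty-cycle limits are illustrative placeholders, not the published parameters.

        import numpy as np

        def next_duty_cycle(image, target=128.0, gain=0.05, d_min=0.05, d_max=1.0):
            # Split the frame into four equal regions and average their mean brightness.
            h, w = image.shape
            quads = [image[:h//2, :w//2], image[:h//2, w//2:],
                     image[h//2:, :w//2], image[h//2:, w//2:]]
            brightness = np.mean([q.mean() for q in quads])
            # Sigmoid mapping: frames darker than the target push the duty cycle up,
            # brighter frames push it down.
            s = 1.0 / (1.0 + np.exp(gain * (brightness - target)))
            return d_min + (d_max - d_min) * s

        frame = np.random.randint(0, 256, (240, 240)).astype(np.float32)
        duty = next_duty_cycle(frame)   # drives the LED PWM for the next capture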

  1. Design of a temperature control system using incremental PID algorithm for a special homemade shortwave infrared spatial remote sensor based on FPGA

    NASA Astrophysics Data System (ADS)

    Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting

    2010-11-01

    The image spectrometer of a spatial remote sensing satellite requires a shortwave band from 2.1 μm to 3 μm, one of the most important bands in remote sensing. We designed the infrared sub-system of the image spectrometer using a homemade 640x1 InGaAs shortwave infrared focal plane array (FPA) sensor, which requires high uniformity and a low level of dark current. The working temperature should be -15 ± 0.2 °C. This paper studies the noise model of the FPA system, investigates the relationship between temperature and dark current noise, and adopts an incremental PID algorithm to generate a PWM wave that controls the temperature of the sensor. The FPGA design is composed of four modules, all coded in VHDL and implemented in the APA300 FPGA device. Experiments show that the intelligent temperature control system succeeds in controlling the temperature of the sensor.
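
    The incremental (velocity-form) PID recurrence can be sketched in software as follows; the gains and duty-cycle limits below are placeholders, and the actual design implements the same recurrence in VHDL on the APA300 FPGA.

        class IncrementalPID:
            """Velocity-form PID: u[k] = u[k-1] + Kp*(e[k]-e[k-1]) + Ki*e[k]
            + Kd*(e[k]-2*e[k-1]+e[k-2]), with the output clamped to a PWM duty cycle."""
            def __init__(self, kp, ki, kd, u0=0.5):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.e1 = self.e2 = 0.0      # e[k-1], e[k-2]
                self.u = u0                  # current duty cycle in [0, 1]

            def update(self, setpoint, measurement):
                e = setpoint - measurement
                du = (self.kp * (e - self.e1) + self.ki * e
                      + self.kd * (e - 2.0 * self.e1 + self.e2))
                self.e2, self.e1 = self.e1, e
                self.u = min(1.0, max(0.0, self.u + du))
                return self.u

        pid = IncrementalPID(kp=0.8, ki=0.05, kd=0.1)
        duty = pid.update(setpoint=-15.0, measurement=-14.6)   # target -15 deg C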

  2. Multipurpose Hyperspectral Imaging System

    NASA Technical Reports Server (NTRS)

    Mao, Chengye; Smith, David; Lanoue, Mark A.; Poole, Gavin H.; Heitschmidt, Jerry; Martinez, Luis; Windham, William A.; Lawrence, Kurt C.; Park, Bosoon

    2005-01-01

    A hyperspectral imaging system of high spectral and spatial resolution that incorporates several innovative features, including a focal plane scanner (U.S. Patent 6,166,373), has been developed. This scanner enables the system to be used for both airborne/spaceborne and laboratory hyperspectral imaging with or without relative movement of the imaging system, and it can be used to scan a target of any size as long as the target can be imaged at the focal plane; examples include automated inspection of food items and identification of single-celled organisms. The spectral resolution of this system is greater than that of prior terrestrial multispectral imaging systems. Moreover, unlike prior high-spectral-resolution airborne and spaceborne hyperspectral imaging systems, this system does not rely on relative movement of the target and the imaging system to sweep an imaging line across a scene. This compact system consists of a front objective mounted on a translation stage with a motorized actuator, and a line-slit imaging spectrograph mounted within a rotary assembly with a rear adaptor to a charge-coupled-device (CCD) camera. Push-broom scanning is carried out by the motorized actuator, which can be controlled either manually by an operator or automatically by a computer to drive the line slit across an image at a focal plane of the front objective. To reduce cost, the system has been designed to integrate as many off-the-shelf components as possible, including the CCD camera and spectrograph. The system has achieved high spectral and spatial resolutions by using a high-quality CCD camera, spectrograph, and front objective lens. Fixtures for attachment of the system to a microscope (U.S. Patent 6,495,818 B1) make it possible to acquire multispectral images of single cells and other microscopic objects.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang

    The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its advantages of nonscanning operation, large field of view, high sensitivity, and high precision. However, achieving more efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging remains a problem. In this study, a novel LADAR system using four linear-mode APDs (LmAPDs) is developed for highly efficient detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying the virtual instrumentation technique. The control system aims to achieve four functions: synchronization of laser emission and the rotating platform, multi-channel synchronous data acquisition, real-time Ethernet upper-level monitoring, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of achieving real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system utilized only 4 LmAPDs to achieve 256-pixel-per-frame detection by employing a 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and the range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.
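
    The abstract does not spell out the modulation and multiplexing scheme, but the idea of recovering many pixels from a few APD channels can be illustrated with orthogonal codes. In the sketch below, each of the 4 channels is assumed to carry 64 pixels spread with Hadamard codes; the code family, channel model, and sizes are assumptions for illustration only.

        import numpy as np
        from scipy.linalg import hadamard

        codes = hadamard(64)                      # 64 orthogonal +/-1 sequences

        def demodulate(channel_samples):
            """Recover 64 pixel intensities from one APD channel (64 samples)."""
            return codes @ channel_samples / 64.0

        # Four LmAPD channels -> 4 x 64 = 256 pixels per frame.
        channels = [np.random.randn(64) for _ in range(4)]
        frame = np.vstack([demodulate(c) for c in channels])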

  4. Acousto-optic laser projection systems for displaying TV information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulyaev, Yu V; Kazaryan, M A; Mokrushin, Yu M

    2015-04-30

    This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation.

  5. Realtime control of multiple-focus phased array heating patterns based on noninvasive ultrasound thermography.

    PubMed

    Casper, Andrew; Liu, Dalong; Ebbini, Emad S

    2012-01-01

    A system for the realtime generation and control of multiple-focus ultrasound phased-array heating patterns is presented. The system employs a 1-MHz, 64-element array and driving electronics capable of fine spatial and temporal control of the heating pattern. The driver is integrated with a realtime 2-D temperature imaging system implemented on a commercial scanner. The coordinates of the temperature control points are defined on B-mode guidance images from the scanner, together with the temperature set points and controller parameters. The temperature at each point is controlled by an independent proportional-integral-derivative (PID) controller that determines the focal intensity at that point. Optimal multiple-focus synthesis is applied to generate the desired heating pattern at the control points. The controller dynamically reallocates the power available among the foci from the shared power supply upon reaching the desired temperature at each control point. Furthermore, anti-windup compensation is implemented at each control point to improve the system dynamics. In vitro experiments in a tissue-mimicking phantom demonstrate the robustness of the controllers for short (2-5 s) and longer multiple-focus high-intensity focused ultrasound exposures. Thermocouple measurements in the vicinity of the control points confirm the dynamics of the temperature variations obtained through noninvasive feedback. © 2011 IEEE
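
    A simplified Python sketch of the per-point control idea follows: each control point runs its own PID loop that requests a focal intensity, and the requests are rescaled whenever they exceed the shared power budget. The gains, the budget, and the omission of anti-windup compensation are simplifications for illustration, not the authors' implementation.

        class PointPID:
            def __init__(self, kp=2.0, ki=0.5, kd=0.1):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.prev_err = 0.0

            def request(self, t_set, t_meas, dt):
                # Returns the non-negative focal intensity requested at this point.
                err = t_set - t_meas
                self.integral += err * dt
                deriv = (err - self.prev_err) / dt
                self.prev_err = err
                return max(0.0, self.kp * err + self.ki * self.integral + self.kd * deriv)

        def allocate(controllers, setpoints, temps, dt, power_budget=1.0):
            # Rescale requests so their total never exceeds the shared power budget.
            requests = [c.request(s, t, dt) for c, s, t in zip(controllers, setpoints, temps)]
            total = sum(requests)
            scale = min(1.0, power_budget / total) if total > 0 else 0.0
            return [r * scale for r in requests]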

  6. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  7. Control of the positional relationship between a sample collection instrument and a surface to be analyzed during a sampling procedure with image analysis

    DOEpatents

    Van Berkel, Gary J.; Kertesz, Vilmos

    2011-08-09

    A system and method that utilize an image analysis approach for controlling the collection instrument-to-surface distance in a sampling system for use, for example, with mass spectrometric detection. The approach involves capturing an image of the collection instrument, or of the shadow it casts across the surface, and using line average brightness (LAB) techniques to determine the actual distance between the collection instrument and the surface. The actual distance is then compared to a target distance for re-optimization, as necessary, of the collection instrument-to-surface distance during an automated surface sampling operation.
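
    A minimal sketch of the line-average-brightness idea follows, assuming a pre-measured calibration table of brightness versus distance. The calibration values, row selection, and sign convention are illustrative; the patented method is described only at the level given above.

        import numpy as np

        def line_average_brightness(image, rows):
            # Mean brightness of each selected image row crossing the probe shadow.
            return image[rows, :].mean(axis=1)

        def estimate_distance(lab_value, calibration):
            # calibration: (brightness, distance_mm) pairs sorted by increasing brightness.
            b, d = np.array(calibration).T
            return float(np.interp(lab_value, b, d))

        def distance_correction(image, rows, calibration, target_mm):
            lab = line_average_brightness(image, rows).mean()
            return target_mm - estimate_distance(lab, calibration)   # signed move command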

  8. Computed tomography automatic exposure control techniques in 18F-FDG oncology PET-CT scanning.

    PubMed

    Iball, Gareth R; Tout, Deborah

    2014-04-01

    Computed tomography (CT) automatic exposure control (AEC) systems are now used in all modern PET-CT scanners. A collaborative study was undertaken to compare AEC techniques of the three major PET-CT manufacturers for fluorine-18 fluorodeoxyglucose half-body oncology imaging. An audit of 70 patients was performed for half-body CT scans taken on a GE Discovery 690, Philips Gemini TF and Siemens Biograph mCT (all 64-slice CT). Patient demographic and dose information was recorded and image noise was calculated as the SD of Hounsfield units in the liver. A direct comparison of the AEC systems was made by scanning a Rando phantom on all three systems for a range of AEC settings. The variation in dose and image quality with patient weight was significantly different for all three systems, with the GE system showing the largest variation in dose with weight and Philips the least. Image noise varied with patient weight in Philips and Siemens systems but was constant for all weights in GE. The z-axis mA profiles from the Rando phantom demonstrate that these differences are caused by the nature of the tube current modulation techniques applied. The mA profiles varied considerably according to the AEC settings used. CT AEC techniques from the three manufacturers yield significantly different tube current modulation patterns and hence deliver different doses and levels of image quality across a range of patient weights. Users should be aware of how their system works and of steps that could be taken to optimize imaging protocols.
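
    The noise metric used in the audit is simple to reproduce: the standard deviation of Hounsfield units inside a liver region of interest. The sketch below uses a circular ROI whose position and size are arbitrary examples.

        import numpy as np

        def image_noise_hu(ct_slice, center, radius):
            # Standard deviation of HU values inside a circular ROI.
            yy, xx = np.ogrid[:ct_slice.shape[0], :ct_slice.shape[1]]
            roi = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
            return float(ct_slice[roi].std())

        # Example on synthetic data standing in for a liver-level slice.
        noise = image_noise_hu(np.random.normal(50, 12, (512, 512)),
                               center=(300, 180), radius=20)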

  9. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
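
    A toy simulation of the pixel-wise coded exposure sampling stage is sketched below: each pixel integrates the scene over one short on-window inside the frame time, and the sensor sums the masked sub-frames into a single coded image. The contiguous-window assumption and its length are illustrative, and the dictionary-based reconstruction is not shown.

        import numpy as np

        def coded_exposure(video, window):
            """video: (T, H, W) sub-frames within one frame time; window: on-duration."""
            T, H, W = video.shape
            start = np.random.randint(0, T - window + 1, size=(H, W))  # per-pixel start
            t = np.arange(T)[:, None, None]
            mask = (t >= start) & (t < start + window)                 # (T, H, W) code
            return (video * mask).sum(axis=0), mask

        video = np.random.rand(36, 64, 64)        # 36 sub-frames of a fast scene
        coded_image, mask = coded_exposure(video, window=4)
        # Reconstruction would recover the full (T, H, W) volume from coded_image
        # using the learned overcomplete dictionary (omitted here).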

  10. PointCom: semi-autonomous UGV control with intuitive interface

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham

    2008-04-01

    Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.

  11. Photoacoustic image-guided navigation system for surgery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Park, Sara; Jang, Jongseong; Kim, Jeesu; Kim, Young Soo; Kim, Chulhong

    2017-03-01

    Identifying and delineating invisible anatomical and pathological details during surgery guides surgical procedures in real time. Various intraoperative imaging modalities have been increasingly employed to minimize such surgical risks as anatomical changes, damage to normal tissues, and human error. However, current methods provide only structural information, which cannot identify critical structures such as blood vessels. The logical next step is an intraoperative imaging modality that can provide functional information. Here, we have successfully developed a photoacoustic (PA) image-guided navigation system for surgery by integrating a position tracking system and a real-time clinical photoacoustic/ultrasound (PA/US) imaging system. PA/US images were acquired in real time and overlaid on pre-acquired cross-sectional magnetic resonance (MR) images. In the overlaid images, PA images represent the optical absorption characteristics of the surgical field, while US and MR images represent the morphological structure of surrounding tissues. To test the feasibility of the system, we prepared a tissue mimicking phantom which contained two samples, methylene blue as a contrast agent and water as a control. We acquired real-time overlaid PA/US/MR images of the phantom, which were well-matched with the optical and morphological properties of the samples. The developed system is the first approach to a novel intraoperative imaging technology based on PA imaging, and we believe that the system can be utilized in various surgical environments in the near future, improving the efficacy of surgical guidance.

  12. System of radiographic control or an imaging system for personal radiographic inspection

    NASA Astrophysics Data System (ADS)

    Babichev, E. A.; Baru, S. E.; Neustroev, V. A.; Leonov, V. V.; Porosev, V. V.; Savinov, G. A.; Ukraintsev, Yu. G.

    2004-06-01

    A security system for personal radiographic inspection, intended to detect explosive materials and plastic weapons, was recently developed at BINP. Basic system parameters are: maximum scanning height 2000 mm, image width 800 mm, number of detector channels 768, channel size 1.05×1 mm, charge collection time per line 2.5 ms, scanning speed 40 cm/s, maximum scanning time 5 s, and radiation dose per inspection <5 μSv. The detector is a multichannel Xe ionization chamber. The image of the inspected person appears on the display immediately after scanning. The pilot sample of this system was put into operation in March 2003.

  13. Low Vision Enhancement System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate NASA software originally developed to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  14. Real-time optical fiber digital speckle pattern interferometry for industrial applications

    NASA Astrophysics Data System (ADS)

    Chan, Robert K.; Cheung, Y. M.; Lo, C. H.; Tam, T. K.

    1997-03-01

    There is current interest, especially in the industrial sector, in using the digital speckle pattern interferometry (DSPI) technique to measure surface stress. Indeed, the many publications on the subject are evidence of the growing interest in the field. However, bringing the technology to industrial use requires the integration of several emerging technologies, viz. optics, feedback control, electronics, image processing, and digital signal processing. Due to the highly interdisciplinary nature of the technique, successful implementation and development require expertise in all of these fields. At Baptist University, under the funding of a major industrial grant, we are developing the technology for the industrial sector. Our system fully exploits optical fibers and diode lasers in its design to enable practical and rugged systems suited to industrial applications. Besides the development in optics, we have broken away from reliance on a microcomputer PC platform for both image capture and processing, and have developed a digital signal processing array system that can handle simultaneous and independent image capture/processing with feedback control. The system, named CASPA for 'cascadable architecture signal processing array,' is a third-generation development system that utilizes up to 7 digital signal processors and has proved to be very powerful. With CASPA we are now in a better position to develop novel optical measurement systems for industrial applications that may require different measurement systems to operate concurrently and to exchange information. Applications such as simultaneous in-plane and out-of-plane DSPI image capture/processing, vibrational analysis with interactive DSPI, and phase-shifting control of optical systems are a few good examples of its potential.

  15. CATAVIÑA: new infrared camera for OAN-SPM

    NASA Astrophysics Data System (ADS)

    Iriarte, Arturo; Cruz-González, Irene; Martínez, Luis A.; Tinoco, Silvio; Lara, Gerardo; Ruiz, Elfego; Sohn, Erika; Bernal, Abel; Angeles, Fernando; Moreno, Arturo; Murillo, Francisco; Langarica, Rosalía; Luna, Esteban; Salas, Luis; Cajero, Vicente

    2006-06-01

    CATAVIÑA is a near-infrared camera system to be operated in conjunction with the existing multi-purpose near-infrared optical bench "CAMALEON" at OAN-SPM. Observing modes include direct imaging, spectroscopy, Fabry-Perot interferometry, and polarimetry. This contribution focuses on the optomechanics and the detector controller of CATAVIÑA, which is planned to start operating later in 2006. The camera consists of an 8 inch LN2 dewar containing a 10-filter carousel, a radiation baffle, and the detector circuit board mount. The system is based on a Rockwell 1024x1024 HgCdTe (HAWAII-I) FPA operating in the 1 to 2.5 micron window. The detector controller/readout system was designed and developed at the UNAM Instituto de Astronomia. It is based on five Texas Instruments DSK digital signal processor (DSP) modules. One module generates the detector and ADC-system control, while the remaining four are in charge of the acquisition of each of the detector's quadrants. Each DSP has a built-in expanded memory module in order to store more than one image. The detector read-out and signal driver subsystems are mounted onto the dewar in a "back-pack" fashion, each containing four independent pre-amplifiers, converters, and signal drivers that communicate through fiber optics with their respective DSPs. This system allows programming of the offset input voltage and converter gain. The controller software architecture is based on a client/server model. The client sends commands through the TCP/IP protocol and acquires the image. The server consists of a microcomputer with an embedded Linux operating system, which runs the main program that receives the user commands and interacts with the timing and acquisition DSPs. The observer's interface allows for several readout and image processing modes.

  16. 3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse

    PubMed Central

    Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.

    2009-01-01

    We developed the Case Cryo-imaging system, which provides information-rich, very high-resolution, color brightfield and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all of which are fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse in which enhanced green fluorescent protein was expressed under the gamma actin promoter in smooth muscle cells gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm over very large regions of mouse brain. Software is fully automated, with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole-animal in vivo imaging and histology. PMID:19248166

  17. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an aberration-free system can still suffer defocus blur due to motion in depth and/or zooming. A precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic model of the PSF precisely because of the complexity of the degradation process. Inspired by the similarity between quantum processes and the imaging process in the fields of probability and statistics, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer; it introduces a 2-bit controlled-NOT quantum gate to control the output and adopts 2 texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that the method achieves high precision and strong generalization ability.

  18. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    NASA Astrophysics Data System (ADS)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the fiber optic gyroscope (FOG), a light leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in 6 degrees of freedom (DOF). Second, an infrared camera is employed to capture the working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG, so that the entire light transmission process of the key sections in the light path can be recorded. Third, an image quality evaluation based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first considered to characterize the infrared image; then robust segmentation algorithms, including graph cut and flood fill, are developed for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, typical light-leaking defect types, such as point defects, wedge defects, and surface defects, can be identified. By using the image quality based method, the applicability of our proposed system is improved dramatically. Many experimental results have proved the validity and effectiveness of this method.

  19. Development and Integration of Control System Models

    NASA Technical Reports Server (NTRS)

    Kim, Young K.

    1998-01-01

    The computer simulation tool TREETOPS has been upgraded and used at NASA/MSFC to model various complicated mechanical systems and to perform dynamics and control analysis of these systems and their pointing control systems. A TREETOPS model of the Advanced X-ray Astrophysics Facility - Imaging (AXAF-I) dynamics and control system was developed to evaluate the AXAF-I pointing performance for the Normal Pointing Mode. An optical model of the Shooting Star Experiment (SSE) was also developed, and its optical performance analysis was done using the MACOS software.

  20. Two improved coherent optical feedback systems for optical information processing

    NASA Technical Reports Server (NTRS)

    Lee, S. H.; Bartholomew, B.; Cederquist, J.

    1976-01-01

    Coherent optical feedback systems are Fabry-Perot interferometers modified to perform optical information processing. Two new systems based on plane parallel and confocal Fabry-Perot interferometers are introduced. The plane parallel system can be used for contrast control, intensity level selection, and image thresholding. The confocal system can be used for image restoration and solving partial differential equations. These devices are simpler and less expensive than previous systems. Experimental results are presented to demonstrate their potential for optical information processing.

  1. Applications of artificial intelligence V; Proceedings of the Meeting, Orlando, FL, May 18-20, 1987

    NASA Technical Reports Server (NTRS)

    Gilmore, John F. (Editor)

    1987-01-01

    The papers contained in this volume focus on current trends in applications of artificial intelligence. Topics discussed include expert systems, image understanding, artificial intelligence tools, knowledge-based systems, heuristic systems, manufacturing applications, and image analysis. Papers are presented on expert system issues in automated, autonomous space vehicle rendezvous; traditional versus rule-based programming techniques; applications to the control of optional flight information; methodology for evaluating knowledge-based systems; and real-time advisory system for airborne early warning.

  2. Integrated Real-Time Control and Imaging System for Microbiorobotics and Nanobiostructures

    DTIC Science & Technology

    2016-01-11

    kit with a control board and ALP 4.1 basic controller suite. The digital micromirror device is the highest resolution 16:9 aspect ratio system. This...in Figure 1, consisted of the following: (1) digital micromirror device (DMD) and controller, (2) an inverted epifluorescence microscope with a flat...accompanying control board and ALP 4.1 basic controller suite. The digital micromirror device is currently the highest commercially available

  3. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately convey depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, while wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.
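
    A minimal sketch of the control-point co-registration step is shown below, estimating an affine transform from manually picked tie points and warping one frame onto the other. The file names, point coordinates, and the choice of an affine model are illustrative assumptions; they are not the WallView implementation.

```python
# Sketch of control-point-based co-registration of two sequential umbilical-camera
# frames (hypothetical file names and tie points; not the WallView software).
import cv2
import numpy as np

frame_a = cv2.imread("et_frame_a.jpg")          # reference frame
frame_b = cv2.imread("et_frame_b.jpg")          # frame to be re-projected

# Manually picked control (tie) points: the same ET features in both frames, (x, y).
pts_b = np.float32([[412, 300], [618, 512], [205, 640], [530, 150]])
pts_a = np.float32([[420, 310], [640, 520], [210, 660], [545, 155]])

# Estimate a 2x3 affine transform mapping frame_b coordinates into frame_a coordinates.
M, inliers = cv2.estimateAffine2D(pts_b, pts_a)

# Warp frame_b so the pair is co-registered and can be viewed as a stereo pair.
h, w = frame_a.shape[:2]
frame_b_reg = cv2.warpAffine(frame_b, M, (w, h))
cv2.imwrite("et_frame_b_registered.jpg", frame_b_reg)
```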

  4. A closed-loop control-loading system

    NASA Technical Reports Server (NTRS)

    Ashworth, B. R.; Parrish, R. V.

    1979-01-01

    The Langley Differential Maneuvering Simulator (DMS) realistically simulates two aircraft operating in a differential mode. It consists of two identical fixed-base cockpits and dome projection systems. Each projection system consists of a sky/Earth projector and a target-image generator and projector. Although the programmable control forces are a small part of the overall system, they play a large role in providing the pilot with kinesthetic cues.

  5. Compensation for Unconstrained Catheter Shaft Motion in Cardiac Catheters

    PubMed Central

    Degirmenci, Alperen; Loschak, Paul M.; Tschabrunn, Cory M.; Anter, Elad; Howe, Robert D.

    2016-01-01

    Cardiac catheterization with ultrasound (US) imaging catheters provides real time US imaging from within the heart, but manually navigating a four degree of freedom (DOF) imaging catheter is difficult and requires extensive training. Existing work has demonstrated robotic catheter steering in constrained bench top environments. Closed-loop control in an unconstrained setting, such as patient vasculature, remains a significant challenge due to friction, backlash, and physiological disturbances. In this paper we present a new method for closed-loop control of the catheter tip that can accurately and robustly steer 4-DOF cardiac catheters and other flexible manipulators despite these effects. The performance of the system is demonstrated in a vasculature phantom and an in vivo porcine animal model. During bench top studies the robotic system converged to the desired US imager pose with sub-millimeter and sub-degree-level accuracy. During animal trials the system achieved 2.0 mm and 0.65° accuracy. Accurate and robust robotic navigation of flexible manipulators will enable enhanced visualization and treatment during procedures. PMID:27525170

  6. Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.

    PubMed

    Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel

    2015-05-15

    We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encryption data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by the hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.
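
    The sketch below illustrates a chaos-driven cycle shift of the kind described, with a simple logistic map standing in for the hyper-chaotic generator of the paper; the map, key value, and image size are assumptions made only for illustration.

```python
# Illustrative cyclic-shift re-encryption of a measured image, driven here by a
# logistic map standing in for the paper's hyper-chaotic system.
import numpy as np

def chaotic_sequence(x0, n, mu=3.99):
    """Logistic-map sequence (a stand-in for a hyper-chaotic generator)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def cycle_shift_encrypt(img, key=0.3567):
    rows, cols = img.shape
    row_shifts = (chaotic_sequence(key, rows) * cols).astype(int)
    col_shifts = (chaotic_sequence(1.0 - key, cols) * rows).astype(int)
    out = img.copy()
    for r in range(rows):                       # shift each row by a key-dependent amount
        out[r] = np.roll(out[r], row_shifts[r])
    for c in range(cols):                       # then shift each column
        out[:, c] = np.roll(out[:, c], col_shifts[c])
    return out, (row_shifts, col_shifts)

def cycle_shift_decrypt(enc, shifts):
    row_shifts, col_shifts = shifts
    out = enc.copy()
    for c in range(out.shape[1]):               # undo in reverse order
        out[:, c] = np.roll(out[:, c], -col_shifts[c])
    for r in range(out.shape[0]):
        out[r] = np.roll(out[r], -row_shifts[r])
    return out

measured = np.random.randint(0, 256, (64, 64)).astype(np.uint8)  # toy "measured" image
enc, shifts = cycle_shift_encrypt(measured)
assert np.array_equal(cycle_shift_decrypt(enc, shifts), measured)
```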

  8. The development of a multifunction lens test instrument by using computer aided variable test patterns

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jen; Wu, Wen-Hong; Huang, Kuo-Cheng

    2009-08-01

    A multi-function lens test instrument is reported in this paper. This system can evaluate the image resolution, image quality, depth of field, image distortion, and light intensity distribution of the tested lens by changing the test patterns. The system consists of a tested lens, a CCD camera, a linear motorized stage, a system fixture, an observer LCD monitor, and a notebook computer that provides the patterns. The LCD monitor displays a series of specified test patterns sent by the notebook. Each displayed pattern then passes through the tested lens and is imaged on the CCD camera sensor. Consequently, the system can evaluate the performance of the tested lens by analyzing the CCD camera image with specially designed software. The major advantage of this system is that it can complete the whole test quickly without interruption due to part replacement, because the test patterns are statically displayed on the monitor and controlled by the notebook.

  9. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  10. Design and realization of an active SAR calibrator for TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Dummer, Georg; Lenz, Rainer; Lutz, Benjamin; Kühl, Markus; Müller-Glaser, Klaus D.; Wiesbeck, Werner

    2005-10-01

    TerraSAR-X is a new earth observing satellite which will be launched in spring 2006. It carries a high resolution X-band SAR sensor. For high image data quality, accurate ground calibration targets are necessary. This paper describes a novel system concept for an active and highly integrated, digitally controlled SAR system calibrator. A total of 16 active transponder and receiver systems and 17 receiver only systems will be fabricated for a calibration campaign. The calibration units serve for absolute radiometric calibration of the SAR image data. Additionally, they are equipped with an extra receiver path for two dimensional satellite antenna pattern recognition. The calibrator is controlled by a dedicated digital Electronic Control Unit (ECU). The different voltages needed by the calibrator and the ECU are provided by the third main unit called Power Management Unit (PMU).

  11. Web surveillance system using platform-based design

    NASA Astrophysics Data System (ADS)

    Lin, Shin-Yo; Tsai, Tsung-Han

    2004-04-01

    A SOPC platform-based design environment for multimedia communications is developed. We embed a softcore processor in an FPGA to perform the image compression and plug an Ethernet daughter board into the SOPC development platform. On this basis, a web surveillance platform system is presented. The web surveillance system consists of three parts: image capture, web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely. By configuring an IP address for the Ethernet daughter board, the user can access the surveillance system via a browser. When the user accesses the surveillance system, the CMOS sensor captures the remote image, which is then fed to the embedded processor. The embedded processor immediately performs the JPEG compression, and the user receives the compressed data via Ethernet. The whole system is implemented on an APEX20K200E484-2X device.

  12. Control Method for Video Guidance Sensor System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)

    2005-01-01

    A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.

  13. Model-based wavefront sensorless adaptive optics system for large aberrations and extended objects.

    PubMed

    Yang, Huizhen; Soloviev, Oleg; Verhaegen, Michel

    2015-09-21

    A model-based wavefront sensorless (WFSless) adaptive optics (AO) system with a 61-element deformable mirror is simulated to correct the imaging of a turbulence-degraded extended object. A fast closed-loop control algorithm, based on the linear relation between the mean square of the aberration gradients and the second moment of the image intensity distribution, is used to generate the control signals for the actuators of the deformable mirror (DM). The restoration capability and the convergence rate of the AO system are investigated for wave-front aberrations of different turbulence strengths. Simulation results show that the model-based WFSless AO system can successfully restore images degraded by different turbulence strengths and obtain a correction very close to the achievable capability of the given DM. Compared with the ideal correction of the 61-element DM, the average relative error of the RMS value is 6%. The convergence rate of the AO system is independent of the turbulence strength and depends only on the number of actuators of the DM.

  14. Control method for video guidance sensor system

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)

    2005-01-01

    A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
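
    A minimal sketch of the mode transition rules described above is given below as a small state machine: automatic reset-to-standby after integrity checks, diagnostic-to-reset after diagnostics, acquisition-to-tracking when a target is found, and reset/diagnostic commands accepted only from standby. The class and flag names are illustrative; this is not the flight implementation.

```python
# Minimal sketch of the six-mode transition logic described above (illustrative only).
from enum import Enum, auto

class Mode(Enum):
    RESET = auto()
    DIAGNOSTIC = auto()
    STANDBY = auto()
    ACQUISITION = auto()
    TRACKING = auto()
    SPOT = auto()

class VgsController:
    def __init__(self):
        self.mode = Mode.RESET

    def step(self, integrity_ok=False, diagnostics_done=False, target_found=False):
        """Apply the automatic transitions."""
        if self.mode is Mode.RESET and integrity_ok:
            self.mode = Mode.STANDBY        # reset -> standby after integrity checks
        elif self.mode is Mode.DIAGNOSTIC and diagnostics_done:
            self.mode = Mode.RESET          # diagnostic -> reset after diagnostics
        elif self.mode is Mode.ACQUISITION and target_found:
            self.mode = Mode.TRACKING       # acquisition -> tracking on acceptable target

    def command(self, requested):
        """Reset and diagnostic commands are accepted only from standby."""
        if requested in (Mode.RESET, Mode.DIAGNOSTIC) and self.mode is not Mode.STANDBY:
            return False
        self.mode = requested
        return True

vgs = VgsController()
vgs.step(integrity_ok=True)             # RESET -> STANDBY
assert vgs.command(Mode.DIAGNOSTIC)     # allowed only from STANDBY
vgs.step(diagnostics_done=True)         # DIAGNOSTIC -> RESET
```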

  15. Holographic zoom system based on spatial light modulator and liquid device

    NASA Astrophysics Data System (ADS)

    Wang, Di; Li, Lei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-02-01

    In this paper, two holographic zoom systems are proposed based on the programmability of a spatial light modulator (SLM) and the zoom characteristics of a liquid lens. First, an active optical zoom system is proposed in which the zoom module is composed of a liquid lens and an SLM. By controlling the focal lengths of the liquid lens and the digital lens encoded on the SLM, we can change the magnification of an image without mechanically moving parts and keep the output plane stationary. Then a color holographic zoom system based on a liquid lens is proposed. The system separates the original object into red, green, and blue components and generates three holograms, respectively. A new hologram with a specific reconstruction distance can be generated by combining the hologram of the digital lens with the hologram of the image. By controlling the focal lengths of the liquid lens and the digital lens encoded on the SLM, we can change the magnification of the reconstructed image.
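
    The sketch below shows the standard thin-lens phase profile that a "digital lens" encoded on an SLM takes, wrapped to 2*pi and quantized to 8-bit gray levels. The pixel count, pixel pitch, wavelength, and focal length are illustrative assumptions, not values from the paper.

```python
# Sketch of the phase pattern of a digital lens for an SLM:
# phi(x, y) = -pi * (x^2 + y^2) / (lambda * f), wrapped to 2*pi.
import numpy as np

def digital_lens_phase(n_pix=1080, pitch_m=8e-6, wavelength_m=532e-9, focal_m=0.5):
    c = np.arange(n_pix) - n_pix / 2
    x, y = np.meshgrid(c * pitch_m, c * pitch_m)
    phase = -np.pi * (x**2 + y**2) / (wavelength_m * focal_m)
    return np.mod(phase, 2 * np.pi)               # wrapped phase to display on the SLM

phase = digital_lens_phase(focal_m=0.75)          # changing focal_m changes the zoom
gray_levels = np.round(phase / (2 * np.pi) * 255).astype(np.uint8)  # 8-bit SLM frame
```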

  16. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. The image registration of the dual-channel images is realized by combining hardware and software methods in the system. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image to transfer color to the fusion result. A color lookup table based on statistical properties of the images is proposed to reduce the computational complexity of color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color appearance to human eyes and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
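
    A minimal sketch of the statistics-based color transfer idea is given below: each channel of the false-color fused image is remapped so that its mean and standard deviation match those of a daytime reference image. The file names are hypothetical, the matching is done per RGB channel for simplicity, and the actual system replaces this computation with a precomputed lookup table on the DSP.

```python
# Sketch of statistics-based color transfer: match the per-channel mean and standard
# deviation of the false-color fused image to a daytime reference image.
import cv2
import numpy as np

fused = cv2.imread("fused_rg.png").astype(np.float32)        # false-color fusion result
reference = cv2.imread("day_reference.png").astype(np.float32)

out = np.empty_like(fused)
for c in range(3):                                           # per channel (B, G, R)
    f_mean, f_std = fused[..., c].mean(), fused[..., c].std() + 1e-6
    r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
    out[..., c] = (fused[..., c] - f_mean) * (r_std / f_std) + r_mean

cv2.imwrite("fused_natural_color.png", np.clip(out, 0, 255).astype(np.uint8))
```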

  17. Image recombination transform algorithm for superresolution structured illumination microscopy

    PubMed Central

    Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.

    2016-01-01

    Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. The generation of structured illumination patterns made by interference of laser beams is broadly employed to obtain high modulation depth of patterns, while the polarizations of the laser beams must be elaborately controlled to guarantee the high contrast of interference intensity, which brings a more complex configuration for the polarization control. The emerging pattern projection strategy is much more compact, but the modulation depth of patterns is deteriorated by the optical transfer function of the optical system, especially in high spatial frequency near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM will suffer from many artifacts in the case of projection-based SIM that possesses a low modulation depth. Here, we propose an alternative reconstruction algorithm based on image recombination transform, which provides an alternative solution to address this problem even in a weak modulation depth. We demonstrated the effectiveness of this algorithm in the multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our developed projection-based SIM system, which applies a computer controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The merit of the system incorporated with the proposed algorithm allows for a low excitation intensity fluorescence imaging even less than 1 W/cm2, which is beneficial for the long-term, in vivo superresolved imaging of live cells and tissues. PMID:27653935

  18. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  19. Development of an image operation system with a motion sensor in dental radiology.

    PubMed

    Sato, Mitsuru; Ogura, Toshihiro; Yasumoto, Yoshiaki; Kadowaki, Yuta; Hayashi, Norio; Doi, Kunio

    2015-07-01

    During examinations and/or treatment, a dentist in the examination room needs to view images on a proper display system. However, dentists cannot operate the image display system by hand, because they always wear gloves to keep their hands away from unsanitized materials. Therefore, we developed a new image operation system that uses a motion sensor. We used the Leap Motion sensor to read the hand movements of a dentist. We programmed the system in C++ to enable various operations of the display system, i.e., click, double click, drag, and drop. Thus, dentists wearing gloves in the examination room can control dental and panoramic images on the image display system intuitively and quickly with hand movements only. We compared the time required with the conventional method using a mouse and with the new method using finger operation. The average operation time with the finger method was significantly shorter than that with the mouse method. This motion sensor method, with appropriate training for finger movements, can provide better operating performance than the conventional mouse method.

  20. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals, and in particular can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then cut and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
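
    For reference, the sketch below shows a standard zigzag scan of a 2D spectrum block into a 1-D sequence, the ordering typically used when cut spectra are spliced together. The 4x4 toy block is an illustrative example, not data from the paper.

```python
# Illustrative zigzag scan of a DCT spectrum block into a 1-D sequence (toy 4x4 example).
import numpy as np

def zigzag(block):
    h, w = block.shape
    out = []
    for s in range(h + w - 1):                        # walk the anti-diagonals
        idx = [(i, s - i) for i in range(h) if 0 <= s - i < w]
        if s % 2 == 0:
            idx = idx[::-1]                           # reverse every other diagonal
        out.extend(block[i, j] for i, j in idx)
    return np.array(out)

block = np.arange(16).reshape(4, 4)
print(zigzag(block))   # 0 1 4 8 5 2 3 6 9 12 13 10 7 11 14 15
```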

  1. Implementation of real-time nonuniformity correction with multiple NUC tables using FPGA in an uncooled imaging system

    NASA Astrophysics Data System (ADS)

    Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho

    2009-05-01

    This paper presents a real-time implementation of Non-Uniformity Correction (NUC). Two-point correction and one-point correction with a shutter were carried out in an uncooled imaging system intended for a missile application. To design a small, lightweight, and high-speed imaging system for a missile system, a SoPC (System on a Programmable Chip) comprising an FPGA and a soft core (MicroBlaze) was used. Real-time NUC and generation of control signals are implemented in the FPGA. In addition, three different NUC tables were made to shorten the operating time and reduce power consumption over a large range of environmental temperatures. The imaging system consists of optics and four electronics boards: a detector interface board, an analog-to-digital converter board, a detector signal generation board, and a power supply board. To evaluate the imaging system, the NETD was measured; it was less than 160 mK at three different environmental temperatures.
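
    A minimal NumPy sketch of two-point NUC is shown below: per-pixel gain and offset are derived from frames of two uniform (cold and hot) reference scenes and applied to raw frames. The frame sizes, scene levels, and simulated fixed-pattern noise are illustrative assumptions; the table selection versus ambient temperature used in the paper is not modeled.

```python
# Minimal sketch of two-point non-uniformity correction (NUC): per-pixel gain and
# offset derived from cold and hot uniform-scene calibration frames.
import numpy as np

def build_nuc_table(cold_frames, hot_frames):
    """cold_frames/hot_frames: stacks of raw frames viewing uniform scenes."""
    cold = cold_frames.mean(axis=0)
    hot = hot_frames.mean(axis=0)
    gain = (hot.mean() - cold.mean()) / (hot - cold + 1e-6)   # per-pixel gain
    offset = cold.mean() - gain * cold                        # per-pixel offset
    return gain, offset

def apply_nuc(raw, gain, offset):
    return gain * raw + offset

# Toy example with simulated fixed-pattern noise.
rng = np.random.default_rng(0)
fpn_gain = 1.0 + 0.05 * rng.standard_normal((240, 320))
fpn_off = 20.0 * rng.standard_normal((240, 320))
cold = np.stack([1000 * fpn_gain + fpn_off for _ in range(8)])
hot = np.stack([3000 * fpn_gain + fpn_off for _ in range(8)])

gain, offset = build_nuc_table(cold, hot)
corrected = apply_nuc(2000 * fpn_gain + fpn_off, gain, offset)
print("residual non-uniformity:", float(corrected.std()))     # near zero after correction
```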

  2. Visual identification system for homeland security and law enforcement support

    NASA Astrophysics Data System (ADS)

    Samuel, Todd J.; Edwards, Don; Knopf, Michael

    2005-05-01

    This paper describes the basic configuration for a visual identification system (VIS) for Homeland Security and law enforcement support. Security and law enforcement systems with an integrated VIS will accurately and rapidly provide identification of vehicles or containers that have entered, exited or passed through a specific monitoring location. The VIS system stores all images and makes them available for recall for approximately one week. Images of alarming vehicles will be archived indefinitely as part of the alarming vehicle's or cargo container's record. Depending on user needs, the digital imaging information will be provided electronically to the individual inspectors, supervisors, and/or control center at the customer's office. The key components of the VIS are the high-resolution cameras that capture images of vehicles, lights, presence sensors, image cataloging software, and image recognition software. In addition to the cameras, the physical integration and network communications of the VIS components with the balance of the security system and client must be ensured.

  3. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is proposed. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold in the YCbCr color model, and by correlating this segmented face area with the right input image, the location coordinates of the target face can be acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the height and stride values can finally be extracted. Some experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
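
    A minimal sketch of the triangulation idea follows: range from stereo disparity via Z = B*f/d, then a camera-centered world position from the pan and tilt angles. The baseline, focal length, disparity, and angle convention are illustrative assumptions, not the paper's calibration.

```python
# Illustrative triangulation: range from stereo disparity, then a world-space position
# from the pan/tilt angles (all numbers and the angle convention are made up).
import math

def distance_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic stereo relation: Z = B * f / d."""
    return baseline_m * focal_px / disparity_px

def world_position(distance_m, pan_deg, tilt_deg):
    """Convert range plus pan/tilt angles into Cartesian camera-centered coordinates."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = distance_m * math.cos(tilt) * math.sin(pan)
    y = distance_m * math.cos(tilt) * math.cos(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z

z = distance_from_disparity(baseline_m=0.12, focal_px=800.0, disparity_px=24.0)  # 4.0 m
print("range:", z, "m, position:", world_position(z, pan_deg=15.0, tilt_deg=-5.0))
```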

  4. Fully Three-Dimensional Virtual-Reality System

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1994-01-01

    The proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. The system, the virtual space pod, is a testbed for control and navigation schemes. Unlike most virtual-reality systems, the virtual space pod would not depend on a ground plane for orientation, which hinders free flight in three dimensions. The space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for the virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.

  5. Implementation of real-time digital endoscopic image processing system

    NASA Astrophysics Data System (ADS)

    Song, Chul Gyu; Lee, Young Mook; Lee, Sang Min; Kim, Won Ky; Lee, Jae Ho; Lee, Myoung Ho

    1997-10-01

    Endoscopy has become a crucial diagnostic and therapeutic procedure in clinical areas. Over the past four years, we have developed a computerized system to record and store clinical data pertaining to endoscopic surgery for laparoscopic cholecystectomy, pelviscopic endometriosis, and surgical arthroscopy. In this study, we developed a computer system composed of a frame grabber, a sound board, a VCR control board, a LAN card, and an endoscopic data management system (EDMS). The computer system also controls peripheral instruments such as a color video printer, a video cassette recorder, and endoscopic input/output signals. The digital endoscopic data management system is based on an open architecture and a set of widely available industry standards, namely Microsoft Windows as the operating system, TCP/IP as the network protocol, and a time-sequential database that handles both images and speech. For data storage, we used MOD and CD-R. The digital endoscopic system was designed to store, recreate, change, and compress signals and medical images. Computerized endoscopy enables us to generate and manipulate the original visual document, making it accessible to a virtually unlimited number of physicians.

  6. Development of a hemispherical rotational modulation collimator system for imaging spatial distribution of radiation sources

    NASA Astrophysics Data System (ADS)

    Na, M.; Lee, S.; Kim, G.; Kim, H. S.; Rho, J.; Ok, J. G.

    2017-12-01

    Detecting and mapping the spatial distribution of radioactive materials is of great importance for environmental and security issues. We design and present a novel hemispherical rotational modulation collimator (H-RMC) system which can visualize the location of a radiation source by collecting signals from incident rays that pass through collimator masks. The H-RMC system comprises a servo motor-controlled rotating module and a hollow, heavy-metallic hemisphere with slits/slats equally spaced at the same angle subtended from the main axis. In addition, we designed an auxiliary instrument to test the imaging performance of the H-RMC system, comprising a high-precision x- and y-axis staging station on which radiation sources of various shapes can be mounted. We fabricated the H-RMC system so that it can be operated in a fully automated fashion through a computer-based controller, and verified the accuracy and reproducibility of the system by measuring the rotational and linear positions with respect to the programmed values. Our H-RMC system may provide a pivotal tool for spatial radiation imaging with high reliability and accuracy.

  7. Tradeoff between insensitivity to depth-induced spherical aberration and resolution of 3D fluorescence imaging due to the use of wavefront encoding with a radially symmetric phase mask

    NASA Astrophysics Data System (ADS)

    Doblas, Ana; Dutta, Ananya; Saavedra, Genaro; Preza, Chrysanthe

    2018-02-01

    Previously, a wavefront encoded (WFE) imaging system implemented using a squared cubic (SQUBIC) phase mask was shown to reduce the sensitivity of the imaging system to spherical aberration (SA). The strength of the SQUBIC phase mask and, as a consequence, the performance of the WFE system are controlled by a design parameter, A. Although the higher the A-value, the more tolerant the WFE system is to SA, this tolerance is gained at the expense of the effective imaging resolution. In this contribution, we investigate this tradeoff in order to find an optimal A-value that balances the effect of SA and the loss of resolution.

  8. Multispectral imaging of the ocular fundus using light emitting diode illumination

    NASA Astrophysics Data System (ADS)

    Everdell, N. L.; Styles, I. B.; Calcagni, A.; Gibson, J.; Hebden, J.; Claridge, E.

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.

  9. Multispectral imaging of the ocular fundus using light emitting diode illumination.

    PubMed

    Everdell, N L; Styles, I B; Calcagni, A; Gibson, J; Hebden, J; Claridge, E

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.

  10. Space imaging measurement system based on fixed lens and moving detector

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Doshida, Minoru; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu

    2006-08-01

    We have developed a Space Imaging Measurement System, based on a fixed lens and a fast-moving detector, for the control of an autonomous ground vehicle. Space measurement is one of the most important tasks in the development of an autonomous ground vehicle. In this study we move the detector back and forth along the optical axis at a fast rate to measure three-dimensional image data. This approach is well suited to an autonomous ground vehicle because the system does not emit any optical energy to measure distance, which keeps it safe. We use a digital camera operating in the visible range, which reduces the cost of three-dimensional image data acquisition compared with an imaging laser system. Many pieces of narrow space imaging measurement data can be combined to construct wide-range three-dimensional data, which improves image recognition of the object space. To achieve fast movement of the detector, we built a counter mass balance into the mechanical crank system of the Space Imaging Measurement System, and added a duct to prevent optical noise from rays that do not pass through the lens. The object distance is derived from the focus distance associated with the best-focused image data, and the best-focused image data are selected as the frame with the maximum standard deviation among the standard deviations of the image series.
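
    A minimal sketch of the depth-from-focus idea follows: pick the frame in the detector sweep with the maximum standard deviation, then map the corresponding image distance to an object distance. The thin-lens mapping, detector positions, focal length, and synthetic frames are assumptions made for illustration only.

```python
# Sketch of depth from focus: the best-focused frame in a detector sweep is the one
# with maximum standard deviation; the thin-lens equation maps the corresponding
# image distance to an object distance (all values illustrative).
import numpy as np

def best_focus_index(frames):
    """frames: sequence of grayscale images captured as the detector moves."""
    scores = [float(np.std(f)) for f in frames]       # std-dev focus measure
    return int(np.argmax(scores)), scores

def object_distance(image_dist_m, focal_len_m):
    """Thin lens: 1/f = 1/v + 1/u  ->  u = f*v / (v - f)."""
    return focal_len_m * image_dist_m / (image_dist_m - focal_len_m)

# Toy sweep: detector (image-side) positions for each captured frame, in metres.
detector_positions = np.linspace(0.051, 0.060, 10)
frames = [np.random.rand(64, 64) * (1 - abs(i - 6) / 10) for i in range(10)]  # fake data

idx, _ = best_focus_index(frames)
print("object distance ~", object_distance(detector_positions[idx], focal_len_m=0.05), "m")
```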

  11. Document image archive transfer from DOS to UNIX

    NASA Technical Reports Server (NTRS)

    Hauser, Susan E.; Gill, Michael J.; Thoma, George R.

    1994-01-01

    An R&D division of the National Library of Medicine has developed a prototype system for automated document image delivery as an adjunct to the labor-intensive manual interlibrary loan service of the library. The document image archive is implemented by a PC controlled bank of optical disk drives which use 12 inch WORM platters containing bitmapped images of over 200,000 pages of medical journals. Following three years of routine operation which resulted in serving patrons with articles both by mail and fax, an effort is underway to relocate the storage environment from the DOS-based system to a UNIX-based jukebox whose magneto-optical erasable 5 1/4 inch platters hold the images. This paper describes the deficiencies of the current storage system, the design issues of modifying several modules in the system, the alternatives proposed and the tradeoffs involved.

  12. High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin

    2007-01-01

    The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.

  13. Hyperspectral imaging for food processing automation

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.

    2002-11-01

    This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line-scan camera with a prism-grating-prism spectrograph, fiber-optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically the band ratio of dual-wavelength (565/517 nm) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system, including a common-aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM) that were selected and validated with the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time for the multispectral images captured by the common-aperture camera was approximately 251 ms, or 3.99 frames/s. A preliminary test shows that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%; however, many false-positive spots that cause system errors were also detected.
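
    The sketch below illustrates the dual-wavelength band-ratio test: the 565 nm band is divided by the 517 nm band pixel by pixel and the result is thresholded. The file names and the threshold value are illustrative assumptions, not the values used in the cited work.

```python
# Sketch of the dual-wavelength band-ratio test for contamination detection:
# ratio the 565 nm band by the 517 nm band and threshold the result.
import numpy as np

band_565 = np.load("band_565nm.npy").astype(np.float32)   # hypothetical band images
band_517 = np.load("band_517nm.npy").astype(np.float32)

ratio = band_565 / (band_517 + 1e-6)        # avoid division by zero
contamination_mask = ratio > 1.05           # illustrative threshold on the ratio

print("suspect pixels:", int(contamination_mask.sum()))
```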

  14. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. To allow the photoelectric device to replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved in NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, pipelining and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads the image data out of the flash and outputs them separately on three different interfaces, Camera Link, LVDS, and PAL, which can provide image data for debugging photoelectric image acquisition equipment and for algorithm validation. However, because the standard PAL image resolution is 720 × 576, which differs from the resolution of the input image, the PAL image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image formats correctly, and the sequences can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time, and improve test efficiency.

  15. Multiple-etalon systems for the Advanced Technology Solar Telescope

    NASA Technical Reports Server (NTRS)

    Gary, G. Allen; Balasubramaniam, K. S.; Sigwarth, Michael

    2003-01-01

    Multiple etalon systems are discussed that meet the science requirements for a narrow-passband imaging system for the 4-meter National Solar Observatory (NSO)/Advanced Technology Solar Telescope (ATST). A multiple etalon system can provide an imaging interferometer that works in four distinct modes: as a spectro-polarimeter, a filter-vector magnetograph, an intermediate-band imager, and a broadband high-resolution imager. Specific dual and triple etalon configurations are described that provide a spectrographic passband of 2.0-3.5 micron and reduce parasitic light levels to 10^-4 as required for precise polarization measurement, e.g., Zeeman measurements of magnetically sensitive lines. A TESOS-like (Telecentric Etalon SOlar Spectrometer) triple etalon system provides a spectral purity of 10^-5. The triple designs have the advantages of reducing the finesse requirement on each etalon, allowing the use of more stable blocking filters, and providing very high spectral purity. A dual-etalon, double-pass (Cavallini-like) system can provide a competing configuration; such a dual-etalon design can provide high contrast. The selection of the final focal plane instrument will depend on a trade-off between an ideal instrument and practical reality. The trade study will include the number of etalons, their aperture sizes, the complexity of the optical train, the number of blocking filters, and the configuration of the electronic control system, computer interfaces, temperature controllers, etalon controllers, and their associated feedback electronics. The heritage of single and multiple etalon systems comes from their use in several observatories, including the Marshall Space Flight Center (MSFC) Solar Observatory, Sacramento Peak Observatory (NSO), Kiepenheuer-Institut fur Sonnenphysik (KIS, Germany), Mees Solar Observatory (University of Hawaii), and Arcetri Astrophysical Observatory (Italy). The design of the ATST multiple etalon system will benefit from the experience gained at these observatories.

  16. Autonomous agricultural remote sensing systems with high spatial and temporal resolutions

    NASA Astrophysics Data System (ADS)

    Xiang, Haitao

    In this research, two novel agricultural remote sensing (RS) systems, a Stand-alone Infield Crop Monitor RS System (SICMRS) and an autonomous Unmanned Aerial Vehicle (UAV) based RS system, have been studied. A high-resolution digital color and multi-spectral camera was used as the image sensor for the SICMRS system. An artificially intelligent (AI) controller based on an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) was developed. Morrow Plots corn field RS images in the 2004 and 2006 growing seasons were collected by the SICMRS system. The field site contained 8 subplots (9.14 m x 9.14 m) that were planted with corn, and three different fertilizer treatments were used among those subplots. The raw RS images were geometrically corrected, resampled to 10 cm resolution, stripped of soil background, and calibrated to true reflectance. The RS images from the two growing seasons were studied and 10 different vegetation indices were derived from each day's image. The results of the image processing demonstrated that the vegetation indices have temporal effects. To achieve high-quality RS data, one has to use the right indices and capture the images at the right time in the growing season. Maximum variations among the image data set occur within the V6-V10 stages, which indicates that these stages are the best period to identify the spatial variability caused by nutrient stress in the corn field. The derived vegetation indices were also used to build yield prediction models via linear regression. All of the yield prediction models were then evaluated by comparing the R2-values, and the best index model from each day's image was picked based on the highest R2-value. It was shown that the green normalized difference vegetation index (GNDVI) based model is more sensitive for yield prediction than the other index-based models. During the VT-R4 stages, the GNDVI-based models were able to explain more than 95% of the potential corn yield consistently for both seasons; the VT-R4 stages are the best period to estimate the corn yield. The SICMRS system is only suitable for RS research at a fixed location. In order to provide more flexibility in RS image collection, a novel UAV-based system has been studied. The UAV-based agricultural RS system used a light helicopter platform equipped with a multi-spectral camera. The UAV control system consisted of an on-board and a ground station subsystem. For the on-board subsystem, an Extended Kalman Filter (EKF) based UAV navigation system was designed and implemented. The navigation system, using low-cost inertial sensors, a magnetometer, GPS, and a single-board computer, was capable of providing continuous estimates of UAV position and attitude at 50 Hz using sensor fusion techniques. The ground station subsystem was designed to be an interface between a human operator and the UAV to implement mission planning, flight command activation, and real-time flight monitoring. The navigation system is controlled by the ground station and is able to navigate the UAV in the air to reach the predefined waypoints and trigger the multi-spectral camera; by doing so, the aerial images at each point can be captured automatically. The developed UAV RS system provides maximum flexibility in crop field RS image collection. It is essential to perform geometric correction and geocoding before an aerial image can be used for precision farming.
An automatic (no Ground Control Point (GCP) needed) UAV image georeferencing algorithm was developed. This algorithm performs automatic image correction and georeferencing based on the real-time navigation data and a camera lens distortion model. The accuracy of the georeferencing algorithm was better than 90 cm according to a series of tests. The accuracy that has been achieved indicates that not only is the position solution good, but the attitude error is extremely small. Waypoint planning for the UAV flight was also investigated; it suggested that a 16.5% forward overlap and a 15% lateral overlap were required to avoid missing the desired mapping area when the UAV flies above 45 m with a 4.5 mm lens. A whole-field mosaic image can be generated from the individual image georeferencing information. A mosaic error of 0.569 m has been achieved, and this accuracy is sufficient for many of the intended precision agricultural applications. With careful interpretation, the UAV images are an excellent source of high spatial and temporal resolution data for precision agricultural applications. (Abstract shortened by UMI.)
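
    For reference, the GNDVI used in the yield models above is computed per pixel as (NIR - Green) / (NIR + Green) from co-registered band images; the sketch below shows that computation with hypothetical file names.

```python
# Sketch of the GNDVI computation: (NIR - Green) / (NIR + Green), computed per pixel
# from co-registered band images (file names are illustrative).
import numpy as np

nir = np.load("nir_band.npy").astype(np.float32)
green = np.load("green_band.npy").astype(np.float32)

gndvi = (nir - green) / (nir + green + 1e-6)     # values range roughly from -1 to 1
plot_mean_gndvi = float(gndvi.mean())            # plot-level value fed to the regression
print("mean GNDVI:", plot_mean_gndvi)
```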

  17. Meniscus Imaging for Crystal-Growth Control

    NASA Technical Reports Server (NTRS)

    Sachs, E. M.

    1983-01-01

    Silicon crystal growth monitored by a new video system reduces operator stress and improves conditions for observation and control of the growing process. The system optics produce greater magnification vertically than horizontally, so the entire meniscus and melt are viewed with high resolution in both the width and height dimensions.

  18. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    NASA Astrophysics Data System (ADS)

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-06-01

    High speed imaging has application in a wide area of industry and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system "UPMC Cam," to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10^6 frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of a microbubble under ultrasound are shown to demonstrate the potential application of the system.

  19. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    PubMed Central

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-01-01

    High speed imaging has application in a wide area of industry and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system “UPMC Cam,” to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10^6 frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of a microbubble under ultrasound are shown to demonstrate the potential application of the system. PMID:23822346

  20. Visual Exploration of Genetic Association with Voxel-based Imaging Phenotypes in an MCI/AD Study

    PubMed Central

    Kim, Sungeun; Shen, Li; Saykin, Andrew J.; West, John D.

    2010-01-01

    Neuroimaging genomics is a new transdisciplinary research field, which aims to examine genetic effects on brain via integrated analyses of high throughput neuroimaging and genomic data. We report our recent work on (1) developing an imaging genomic browsing system that allows for whole genome and entire brain analyses based on visual exploration and (2) applying the system to the imaging genomic analysis of an existing MCI/AD cohort. Voxel-based morphometry is used to define imaging phenotypes. ANCOVA is employed to evaluate the effect of the interaction of genotypes and diagnosis in relation to imaging phenotypes while controlling for relevant covariates. Encouraging experimental results suggest that the proposed system has substantial potential for enabling discovery of imaging genomic associations through visual evaluation and for localizing candidate imaging regions and genomic regions for refined statistical modeling. PMID:19963597

  1. An e-Learning System with MR for Experiments Involving Circuit Construction to Control a Robot

    ERIC Educational Resources Information Center

    Takemura, Atsushi

    2016-01-01

    This paper proposes a novel e-Learning system for technological experiments involving electronic circuit construction and robot motion control, which are necessary skills in the field of technology. The proposed system performs automated recognition of circuit images transmitted from individual learners and automatically supplies the learner with…

  2. Configuration of automatic exposure control on mammography units for computed radiography to match patient dose of screen film systems

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying Joseph; Huang, Weidong

    2009-02-01

    Computed radiography (CR) is considered a drop-in addition to or replacement for traditional screen-film (SF) systems in digital mammography. Unlike other technologies, CR has the advantage of being compatible with existing mammography units. One of the challenges, however, is to properly configure the automatic exposure control (AEC) on existing mammography units for CR use. Unlike analogue systems, the capture and display of digital CR images are decoupled. The function of the AEC changes from ensuring proper and consistent optical density of the captured image on film to balancing image quality with the patient dose needed for CR. One preference when acquiring CR images under AEC is to use the same patient dose as SF systems. The challenge is whether the existing AEC design and calibration process, most of which are proprietary to the X-ray system manufacturers and tailored specifically for SF response properties, can be adapted for CR cassettes in order to compensate for their response and attenuation differences. This paper describes methods for configuring the AEC of three different mammography unit models to match the patient dose used for CR with that used for a KODAK MIN-R 2000 SF System. Based on phantom test results, these methods bring the dose level under AEC for the CR systems into agreement with the dose of the SF systems. These methods can be used in clinical environments that require the acquisition of CR images under AEC at the same dose levels as those used for SF systems.

  3. STRIPE: Remote Driving Using Limited Image Data

    NASA Technical Reports Server (NTRS)

    Kay, Jennifer S.

    1997-01-01

    Driving a vehicle, either directly or remotely, is an inherently visual task. When heavy fog limits visibility, we reduce our car's speed to a slow crawl, even along very familiar roads. In teleoperation systems, an operator's view is limited to images provided by one or more cameras mounted on the remote vehicle. Traditional methods of vehicle teleoperation require that a real time stream of images is transmitted from the vehicle camera to the operator control station, and the operator steers the vehicle accordingly. For this type of teleoperation, the transmission link between the vehicle and operator workstation must be very high bandwidth (because of the high volume of images required) and very low latency (because delayed images can cause operators to steer incorrectly). In many situations, such a high-bandwidth, low-latency communication link is unavailable or even technically impossible to provide. Supervised TeleRobotics using Incremental Polyhedral Earth geometry, or STRIPE, is a teleoperation system for a robot vehicle that allows a human operator to accurately control the remote vehicle across very low bandwidth communication links, and communication links with large delays. In STRIPE, a single image from a camera mounted on the vehicle is transmitted to the operator workstation. The operator uses a mouse to pick a series of 'waypoints' in the image that define a path that the vehicle should follow. These 2D waypoints are then transmitted back to the vehicle, where they are used to compute the appropriate steering commands while the next image is being transmitted. STRIPE requires no advance knowledge of the terrain to be traversed, and can be used by novice operators with only minimal training. STRIPE is a unique combination of computer and human control. The computer must determine the 3D world path designated by the 2D waypoints and then accurately control the vehicle over rugged terrain. The human issues involve accurate path selection, and the prevention of disorientation, a common problem across all types of teleoperation systems. STRIPE is the only semi-autonomous teleoperation system that can accurately follow paths designated in monocular images on varying terrain. The thesis describes the STRIPE algorithm for tracking points using the incremental geometry model, insight into the design and redesign of the interface, an analysis of the effects of potential errors, details of the user studies, and hints on how to improve both the algorithm and interface for future designs.
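
    To make the waypoint-designation step concrete, the sketch below back-projects a clicked 2D image waypoint onto a flat ground plane under a pinhole camera model. This is only a simplified stand-in for STRIPE's incremental polyhedral earth geometry, and all calibration values (intrinsics, camera height, pitch) are made-up assumptions.

      # Simplified illustration (not STRIPE's incremental polyhedral earth model):
      # back-project a clicked 2D image waypoint onto a flat ground plane under a
      # pinhole camera model. All calibration values below are assumptions.
      import numpy as np

      def waypoint_to_ground(u, v, K, R_cw, cam_pos, ground_z=0.0):
          # K: 3x3 intrinsics; R_cw: rotation taking camera-frame vectors to the
          # world frame; cam_pos: camera position in world coordinates (z up).
          ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
          ray_world = R_cw @ ray_cam
          s = (ground_z - cam_pos[2]) / ray_world[2]     # scale along the ray to reach the plane
          return cam_pos + s * ray_world

      # Made-up calibration: world x forward, z up; camera 1.5 m above the ground,
      # looking along +x and pitched 10 degrees downward.
      p = np.deg2rad(10.0)
      R_cw = np.array([[0.0, -np.sin(p),  np.cos(p)],    # camera x, y, z axes in world coordinates
                       [-1.0,       0.0,        0.0],
                       [0.0, -np.cos(p), -np.sin(p)]])
      K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
      print(waypoint_to_ground(320, 300, K, R_cw, np.array([0.0, 0.0, 1.5])))  # ~5 m ahead on the ground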

  4. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost uses.

  5. DDGIPS: a general image processing system in robot vision

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Ying, Jun; Ye, Xiuqing; Gu, Weikang

    2000-10-01

    Real-time image processing is a key task in robot vision. Owing to the limitations of earlier hardware, many algorithm-oriented firmware systems were designed in the past, but their architectures were not flexible enough to support a multi-algorithm development system. The rapid development of microelectronics has produced many high-performance DSP chips and high-density FPGA chips, and this makes it possible to construct a more flexible architecture for real-time image processing systems. In this paper, a Double DSP General Image Processing System (DDGIPS) is presented. We construct a two-DSP, FPGA-based computational system with two TMS320C6201s. The TMS320C6x devices are fixed-point processors based on an advanced VLIW CPU, which has eight functional units, including two multipliers and six arithmetic logic units. These features make the C6x a good candidate for a general-purpose system. In our system, each of the two TMS320C6201s has a local memory space, and they also share a system memory space that enables them to intercommunicate and exchange data efficiently. At the same time, they can be directly interconnected in a star-shaped architecture. All of this is under the control of an FPGA group. As the core of the system, the FPGA plays a very important role: it takes charge of DSP control, DSP communication, memory space access arbitration, and communication between the system and the host machine. By reconfiguring the FPGA, all of the interconnections between the two DSPs, or between DSP and FPGA, can be changed. In this way, users can easily rebuild the real-time image processing system according to the data stream and the task of the application, gaining great flexibility.

  6. DDGIPS: a general image processing system in robot vision

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Ying, Jun; Ye, Xiuqing; Gu, Weikang

    2000-10-01

    Real-time image processing is a key task in robot vision. Owing to the limitations of earlier hardware, many algorithm-oriented firmware systems were designed in the past, but their architectures were not flexible enough to support a multi-algorithm development system. The rapid development of microelectronics has produced many high-performance DSP chips and high-density FPGA chips, and this makes it possible to construct a more flexible architecture for real-time image processing systems. In this paper, a Double DSP General Image Processing System (DDGIPS) is presented. We construct a two-DSP, FPGA-based computational system with two TMS320C6201s. The TMS320C6x devices are fixed-point processors based on an advanced VLIW CPU, which has eight functional units, including two multipliers and six arithmetic logic units. These features make the C6x a good candidate for a general-purpose system. In our system, each of the two TMS320C6201s has a local memory space, and they also share a system memory space that enables them to intercommunicate and exchange data efficiently. At the same time, they can be directly interconnected in a star-shaped architecture. All of this is under the control of an FPGA group. As the core of the system, the FPGA plays a very important role: it takes charge of DSP control, DSP communication, memory space access arbitration, and communication between the system and the host machine. By reconfiguring the FPGA, all of the interconnections between the two DSPs, or between DSP and FPGA, can be changed. In this way, users can easily rebuild the real-time image processing system according to the data stream and the task of the application, gaining great flexibility.

  7. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    NASA Astrophysics Data System (ADS)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good-quality image or video, to reduce patient discomfort and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm × 1 mm × 1.65 mm is used. Due to the physical properties of the sensors and the limitations of the human visual system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. Control and data transfer are done through a USB 3.0 endpoint in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6 LX150 FPGA.
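
    As a minimal software illustration of two of the pipeline stages named above (demosaicking and gamma correction), the sketch below processes a raw Bayer frame with OpenCV and NumPy; an RGGB pattern and a display gamma of 2.2 are assumptions, and the FPGA implementation in the paper of course differs.

      # Demosaic an 8-bit RGGB Bayer frame and apply display gamma (sketch only).
      import numpy as np
      import cv2

      def process_raw(bayer_u8, gamma=2.2):
          bgr = cv2.cvtColor(bayer_u8, cv2.COLOR_BayerRG2BGR)          # bilinear demosaicking
          lin = bgr.astype(np.float32) / 255.0
          return np.clip((lin ** (1.0 / gamma)) * 255.0, 0, 255).astype(np.uint8)

      # Example on a synthetic raw frame:
      raw = (np.random.rand(480, 640) * 255).astype(np.uint8)
      print(process_raw(raw).shape)     # (480, 640, 3)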

  8. Flexible distributed architecture for semiconductor process control and experimentation

    NASA Astrophysics Data System (ADS)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that uses the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: the specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket-based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
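
    The message-passing idea can be sketched as below. The actual message set is predefined by the MIT system and is not reproduced here, so the host name, port number, and text command are purely illustrative.

      # Send one newline-terminated text command over TCP and read the reply
      # (illustrative of socket-based messaging between components, not the
      # system's real protocol).
      import socket

      def send_command(host, port, command, timeout=5.0):
          with socket.create_connection((host, port), timeout=timeout) as sock:
              sock.sendall((command + "\n").encode("ascii"))
              reply = sock.makefile("r", encoding="ascii").readline()
          return reply.strip()

      # Hypothetical cell-controller endpoint and command:
      # print(send_command("cell-controller.local", 5025, "GET_ETCH_DEPTH"))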

  9. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    NASA Astrophysics Data System (ADS)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being developed these days, with a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object and an ultrasonic sensor was used to detect obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor was used to complement the image processing so that obstacles could be reliably detected. The obstacle avoidance system observes the program decisions arising from the obstacle conditions read by the camera and ultrasonic sensors. Visual feedback control based on PID controllers is used to control the drone's movement.
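
    A rough sketch of this pipeline is given below: OpenCV's pyramidal Lucas-Kanade tracker follows one feature, and a PID controller turns its horizontal offset from the image center into a yaw correction. The gains, frame rate, and use of a webcam as the video source are assumptions, and the command is only printed rather than sent to a flight controller.

      # Illustrative only: track one feature with OpenCV's pyramidal Lucas-Kanade
      # tracker and convert its horizontal offset from the image center into a yaw
      # command via a PID controller.
      import cv2

      class PID:
          def __init__(self, kp, ki, kd):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integral, self.prev_err = 0.0, 0.0

          def update(self, err, dt):
              self.integral += err * dt
              deriv = (err - self.prev_err) / dt if dt > 0 else 0.0
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      cap = cv2.VideoCapture(0)                      # any video source works here
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
      pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1, qualityLevel=0.3, minDistance=7)
      pid = PID(kp=0.005, ki=0.0005, kd=0.001)

      while ok and pts is not None:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
          if status[0][0] == 1:
              err = frame.shape[1] / 2 - pts[0][0][0]  # horizontal offset from image center
              print("yaw command:", pid.update(err, dt=1 / 30.0))
          prev_gray = gray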

  10. Adaptive temperature profile control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.

    1991-01-01

    An intelligent measurement system is described which is used to assess the shape of a crystal while it is growing inside a multizone transparent furnace. A color video imaging system observes the crystal in real time, and determines the position and the shape of the interface. This information is used to evaluate the crystal growth rate, and to analyze the effects of translational velocity and temperature profiles on the shape of the interface. Creation of this knowledge base is the first step to incorporate image processing into furnace control.

  11. Automation of image data processing. (Polish Title: Automatyzacja procesu przetwarzania danych obrazowych)

    NASA Astrophysics Data System (ADS)

    Preuss, R.

    2014-12-01

    This article discusses the current capabilities of automated processing of image data using the example of the PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or, more often, on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. The image matching algorithm is also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All of the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics both for images obtained by a metric Vexell camera and for a block of images acquired by a non-metric UAV system.

  12. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet form factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook - our first implementation on a laptop computer; tangiView - a more refined implementation on a tablet device; tangiPaint - a tangible digital painting application; and phantoView - an application that takes the tangible imaging concept into stereoscopic 3D.

  13. Focused US system for MR imaging-guided tumor ablation.

    PubMed

    Cline, H E; Hynynen, K; Watkins, R D; Adams, W J; Schenck, J F; Ettinger, R H; Freund, W R; Vetro, J P; Jolesz, F A

    1995-03-01

    To measure the performance characteristics of a focused ultrasound (US) system for magnetic resonance (MR) imaging-guided tumor ablation. The authors constructed a focused US system for MR imaging-guided tumor ablation. The location of the heated region and thermal dose were monitored with temperature-sensitive MR images obtained in phantoms and rabbit skeletal muscle after application of each sonic pulse. The region heated by the focused ultrasound beam was within 1 mm of that observed on temperature-sensitive fast gradient-echo MR images of in vivo rabbit skeletal muscle. Analysis of heat flow and the rate of coagulation necrosis provided an estimate of the size of the ablated region that was in agreement with experimental findings. MR imaging provides target definition and control for thermal therapy in regions of variable perfusion or in tissues that are not well characterized.

  14. Robotically assisted ureteroscopy for kidney exploration.

    PubMed

    Talari, Hadi F; Monfaredi, Reza; Wilson, Emmanuel; Blum, Emily; Bayne, Christopher; Peters, Craig; Zhang, Anlin; Cleary, Kevin

    2017-02-01

    Ureteroscopy is a minimally invasive procedure for diagnosis and treatment of a wide range of urinary tract pathologies. It is most commonly performed in the diagnostic work-up of hematuria and the diagnosis and treatment of upper urinary tract malignancies and calculi. Ergonomic and visualization challenges as well as radiation exposure are limitations to conventional ureteroscopy. For example, for diagnostic tumor inspection, the urologist has to maneuver the ureteroscope through each of the 6 to 12 calyces in the kidney under fluoroscopy to ensure complete surveillance. Therefore, we have been developing a robotic system to "power drive" a flexible fiber-optic ureteroscope with 3D tip tracking and pre-operative image overlay. Our goal is to provide the urologist precise control of the ureteroscope tip with less radiation exposure. Our prototype system allows control of the three degrees of freedom of the ureteroscope via brushless motors and a joystick interface. The robot provides a steady platform for controlling the ureteroscope. Furthermore, the robot design facilitates a quick "snap-in" of the ureteroscope, thus allowing the ureteroscope to be mounted midway through the procedure. We have completed the mechanical system and the controlling software and begun evaluation using a kidney phantom. We put MRI-compatible fiducials on the phantom and obtained MR images. We registered these images with the robot using an electromagnetic tracking system and paired-point registration. The system is described and initial evaluation results are given in this paper.
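
    The paired-point registration step mentioned above can be sketched with the standard SVD (Kabsch/Horn) solution for a rigid transform between corresponding fiducial sets; the fiducial coordinates below are invented purely for illustration.

      # Paired-point rigid registration (Horn's method via SVD): find R, t with
      # dst ~= R @ src + t. The fiducial coordinates are invented for illustration.
      import numpy as np

      def paired_point_registration(src, dst):
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                   # guard against a reflection solution
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          return R, dst_c - R @ src_c

      image_pts   = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
      tracker_pts = np.array([[10, 5, 2], [60, 5, 2], [10, 55, 2], [10, 5, 52]], float)
      R, t = paired_point_registration(image_pts, tracker_pts)
      print(np.round(R, 3), np.round(t, 2))          # identity rotation, translation [10, 5, 2]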

  15. Endovascular MR-guided Renal Embolization by Using a Magnetically Assisted Remote-controlled Catheter System

    PubMed Central

    Lillaney, Prasheel V.; Yang, Jeffrey K.; Losey, Aaron D.; Martin, Alastair J.; Cooke, Daniel L.; Thorne, Bradford R. H.; Barry, David C.; Chu, Andrew; Stillson, Carol; Do, Loi; Arenson, Ronald L.; Saeed, Maythem; Wilson, Mark W.

    2016-01-01

    Purpose To assess the feasibility of a magnetically assisted remote-controlled (MARC) catheter system under magnetic resonance (MR) imaging guidance for performing a simple endovascular procedure (ie, renal artery embolization) in vivo and to compare with x-ray guidance to determine the value of MR imaging guidance and the specific areas where the MARC system can be improved. Materials and Methods In concordance with the Institutional Animal Care and Use Committee protocol, in vivo renal artery navigation and embolization were tested in three farm pigs (mean weight 43 kg ± 2 [standard deviation]) under real-time MR imaging at 1.5 T. The MARC catheter device was constructed by using an intramural copper-braided catheter connected to a laser-lithographed saddle coil at the distal tip. Interventionalists controlled an in-room cart that delivered electrical current to deflect the catheter in the MR imager. Contralateral kidneys were similarly embolized under x-ray guidance by using standard clinical catheters and guidewires. Changes in renal artery flow and perfusion were measured before and after embolization by using velocity-encoded and perfusion MR imaging. Catheter navigation times, renal parenchymal perfusion, and renal artery flow rates were measured for MR-guided and x-ray–guided embolization procedures and are presented as means ± standard deviation in this pilot study. Results Embolization was successful in all six kidneys under both x-ray and MR imaging guidance. Mean catheterization time with MR guidance was 93 seconds ± 56, compared with 60 seconds ± 22 for x-ray guidance. Mean changes in perfusion rates were 4.9 au/sec ± 0.8 versus 4.6 au/sec ± 0.6, and mean changes in renal flow rate were 2.1 mL/min/g ± 0.2 versus 1.9 mL/min/g ± 0.2 with MR imaging and x-ray guidance, respectively. Conclusion The MARC catheter system is feasible for renal artery catheterization and embolization under real-time MR imaging in vivo, and quantitative physiologic measures under MR imaging guidance were similar to those measured under x-ray guidance, suggesting that the MARC catheter system could be used for endovascular procedures with interventional MR imaging. © RSNA, 2016 PMID:27019290

  16. Endovascular MR-guided Renal Embolization by Using a Magnetically Assisted Remote-controlled Catheter System.

    PubMed

    Lillaney, Prasheel V; Yang, Jeffrey K; Losey, Aaron D; Martin, Alastair J; Cooke, Daniel L; Thorne, Bradford R H; Barry, David C; Chu, Andrew; Stillson, Carol; Do, Loi; Arenson, Ronald L; Saeed, Maythem; Wilson, Mark W; Hetts, Steven W

    2016-10-01

    Purpose To assess the feasibility of a magnetically assisted remote-controlled (MARC) catheter system under magnetic resonance (MR) imaging guidance for performing a simple endovascular procedure (ie, renal artery embolization) in vivo and to compare with x-ray guidance to determine the value of MR imaging guidance and the specific areas where the MARC system can be improved. Materials and Methods In concordance with the Institutional Animal Care and Use Committee protocol, in vivo renal artery navigation and embolization were tested in three farm pigs (mean weight 43 kg ± 2 [standard deviation]) under real-time MR imaging at 1.5 T. The MARC catheter device was constructed by using an intramural copper-braided catheter connected to a laser-lithographed saddle coil at the distal tip. Interventionalists controlled an in-room cart that delivered electrical current to deflect the catheter in the MR imager. Contralateral kidneys were similarly embolized under x-ray guidance by using standard clinical catheters and guidewires. Changes in renal artery flow and perfusion were measured before and after embolization by using velocity-encoded and perfusion MR imaging. Catheter navigation times, renal parenchymal perfusion, and renal artery flow rates were measured for MR-guided and x-ray-guided embolization procedures and are presented as means ± standard deviation in this pilot study. Results Embolization was successful in all six kidneys under both x-ray and MR imaging guidance. Mean catheterization time with MR guidance was 93 seconds ± 56, compared with 60 seconds ± 22 for x-ray guidance. Mean changes in perfusion rates were 4.9 au/sec ± 0.8 versus 4.6 au/sec ± 0.6, and mean changes in renal flow rate were 2.1 mL/min/g ± 0.2 versus 1.9 mL/min/g ± 0.2 with MR imaging and x-ray guidance, respectively. Conclusion The MARC catheter system is feasible for renal artery catheterization and embolization under real-time MR imaging in vivo, and quantitative physiologic measures under MR imaging guidance were similar to those measured under x-ray guidance, suggesting that the MARC catheter system could be used for endovascular procedures with interventional MR imaging. (©) RSNA, 2016.

  17. A real-time monitoring system for night glare protection

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Ni, Xuxiang

    2010-11-01

    When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions and details are lost in and near these saturated regions because of glare. This work aims at developing a real-time night monitoring system. The system can decrease the influence of glare and recover more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. The LCoS, a reflective liquid crystal device, can digitally modulate the intensity of the reflected light at every pixel. Through the modulation function of the LCoS, the CCD is exposed region by region. Under DSP control, the light intensity is decreased to a minimum in the glare regions, and in the other regions it is negative-feedback modulated based on PID theory. In this way, more details of the object are imaged on the CCD and glare protection of the monitoring system is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. The high-quality, high-dynamic-range image is captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be removed.
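
    A simplified software model of the per-region negative-feedback idea is sketched below: the frame is divided into blocks and the transmission of each block's modulator region is nudged toward a brightness target. The block size, target level, and gain are assumptions, not the paper's PID parameters or LCoS interface.

      # Simplified per-region negative-feedback step: darken modulator blocks whose
      # mean image intensity exceeds a target, brighten those below it.
      import numpy as np

      def update_modulation(frame, modulation, target=120.0, gain=0.002, block=16):
          h, w = frame.shape
          for y in range(0, h, block):
              for x in range(0, w, block):
                  err = target - frame[y:y + block, x:x + block].mean()
                  modulation[y // block, x // block] = np.clip(
                      modulation[y // block, x // block] + gain * err, 0.0, 1.0)
          return modulation

      frame = np.full((480, 640), 80.0)
      frame[96:192, 304:400] = 255.0                 # simulated headlight glare
      mod = np.ones((480 // 16, 640 // 16))          # start fully transmissive
      print(update_modulation(frame, mod).min())     # glare blocks are attenuated first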

  18. Real-time laser cladding control with variable spot size

    NASA Astrophysics Data System (ADS)

    Arias, J. L.; Montealegre, M. A.; Vidal, F.; Rodríguez, J.; Mann, S.; Abels, P.; Motmans, F.

    2014-03-01

    Laser cladding has been used in different industries to improve surface properties or to reconstruct damaged pieces. In order to cover areas considerably larger than the diameter of the laser beam, successive partially overlapping tracks are deposited. With no control over the process variables, this leads to an increase in temperature, which can degrade the mechanical properties of the laser-cladded material. Commonly, the process is monitored and controlled by a PC using cameras, but this control suffers from a lack of speed caused by the image processing step. The aim of this work is to design and develop an FPGA-based laser cladding control system. This system is intended to modify the laser beam power according to the melt pool width, which is measured using a CMOS camera. All the control and monitoring tasks are carried out by an FPGA, taking advantage of its abundance of resources and speed of operation. The robustness of the image processing algorithm is assessed, as well as the control system performance. Laser power is decreased as substrate temperature increases, thus maintaining a constant clad width. This FPGA-based control system is integrated in an adaptive laser cladding system, which also includes an adaptive optical system that controls the laser focus distance on the fly. The whole system constitutes an efficient instrument for part repair with complex geometries and coating of selective surfaces. This will be a significant step toward full industrial implementation of an automated laser cladding process.
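
    The width-based power correction can be illustrated roughly as below: threshold the bright melt pool in a camera frame, take its widest row as the clad width, and nudge the laser power proportionally toward a width setpoint. The threshold, setpoint, and gain values are assumptions for the sketch, not the FPGA implementation.

      # Rough sketch: measure clad width as the widest bright row of the melt-pool
      # image, then apply a proportional power correction toward a width setpoint.
      import numpy as np

      def measure_clad_width(frame, threshold=200):
          mask = frame > threshold
          return int(mask.sum(axis=1).max())         # bright pixels in the widest row

      def adjust_power(power, width_px, setpoint_px=60, k=2.0, p_min=200.0, p_max=2000.0):
          return float(np.clip(power - k * (width_px - setpoint_px), p_min, p_max))

      frame = np.zeros((120, 160), np.uint8)
      frame[50:70, 40:110] = 255                     # synthetic melt pool, 70 px wide
      print(adjust_power(1000.0, measure_clad_width(frame)))   # 980.0 -> power reduced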

  19. Initial experience with a radiology imaging network to newborn and intensive care units.

    PubMed

    Witt, R M; Cohen, M D; Appledorn, C R

    1991-02-01

    A digital image network has been installed in the James Whitcomb Riley Hospital for Children at the Indiana University Medical Center to create a limited all-digital imaging system. The system is composed of commercial components, the Philips/AT&T CommView system (Philips Medical Systems, Shelton, CT; AT&T Bell Laboratories, West Long Beach, NJ), and connects an existing Philips Computed Radiology (PCR) system to two remote workstations that reside in the intensive care unit and the newborn nursery. The purpose of the system is to display images obtained from the PCR system on the remote workstations for direct viewing by referring clinicians, and to reduce many of their visits to the radiology reading room three floors away. The design criteria include the ability to centrally control all image management functions on the remote workstations to relieve the clinicians from any image management tasks except for recalling patient images. The principal components of the system are the Philips PCR system, the acquisition module (AM), and the PCR interface to the Data Management Module (DMM). Connected to the DMM are an Enhanced Graphics Display Workstation (EGDW), an optical disk drive, and a network gateway to an ethernet link. The ethernet network is the connection to the two Results Viewing Stations (RVS), and both RVSs are approximately 100 m from the gateway. The DMM acts as an image file server and an image archive device. The DMM manages the image data base and can load images to the EGDW and the two RVSs. The system has met the initial design specifications and can successfully capture images from the PCR and direct them to the RVSs. (ABSTRACT TRUNCATED AT 250 WORDS)

  20. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software components to build the video review system.

  1. Conference on Space and Military Applications of Automation and Robotics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.

  2. Automating High-Precision X-Ray and Neutron Imaging Applications with Robotics

    DOE PAGES

    Hashem, Joseph Anthony; Pryor, Mitch; Landsberger, Sheldon; ...

    2017-03-28

    Los Alamos National Laboratory and the University of Texas at Austin recently implemented a robotically controlled nondestructive testing (NDT) system for X-ray and neutron imaging. This system is intended to address the need for accurate measurements for a variety of parts, to track measurement geometry at every imaging location, and to support high-throughput applications. The system was deployed in a beam port at a nuclear research reactor and in an operational inspection X-ray bay. The nuclear research reactor system consisted of a precision industrial seven-axis robot, a 1.1-MW TRIGA research reactor, and a scintillator-mirror-camera-based imaging system. The X-ray bay system incorporated the same robot, a 225-keV microfocus X-ray source, and a custom flat-panel digital detector. The robotic positioning arm is programmable and allows imaging in multiple configurations, including planar and cylindrical, as well as other user-defined geometries that provide enhanced engineering evaluation capability. The image acquisition device is coupled with the robot for automated image acquisition. The robot can achieve target positional repeatability within 17 μm in 3-D space. Flexible automation with nondestructive imaging saves costs, reduces dosage, adds imaging techniques, and achieves better-quality results in less time. Specifics regarding the robotic system and the image acquisition and evaluation processes are presented. In conclusion, this paper reviews the comprehensive testing and system evaluation to affirm the feasibility of robotic NDT, presents the system configuration, and reviews results for both X-ray and neutron radiography imaging applications.

  3. Automating High-Precision X-Ray and Neutron Imaging Applications with Robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashem, Joseph Anthony; Pryor, Mitch; Landsberger, Sheldon

    Los Alamos National Laboratory and the University of Texas at Austin recently implemented a robotically controlled nondestructive testing (NDT) system for X-ray and neutron imaging. This system is intended to address the need for accurate measurements for a variety of parts, to track measurement geometry at every imaging location, and to support high-throughput applications. The system was deployed in a beam port at a nuclear research reactor and in an operational inspection X-ray bay. The nuclear research reactor system consisted of a precision industrial seven-axis robot, a 1.1-MW TRIGA research reactor, and a scintillator-mirror-camera-based imaging system. The X-ray bay system incorporated the same robot, a 225-keV microfocus X-ray source, and a custom flat-panel digital detector. The robotic positioning arm is programmable and allows imaging in multiple configurations, including planar and cylindrical, as well as other user-defined geometries that provide enhanced engineering evaluation capability. The image acquisition device is coupled with the robot for automated image acquisition. The robot can achieve target positional repeatability within 17 μm in 3-D space. Flexible automation with nondestructive imaging saves costs, reduces dosage, adds imaging techniques, and achieves better-quality results in less time. Specifics regarding the robotic system and the image acquisition and evaluation processes are presented. In conclusion, this paper reviews the comprehensive testing and system evaluation to affirm the feasibility of robotic NDT, presents the system configuration, and reviews results for both X-ray and neutron radiography imaging applications.

  4. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control.

    PubMed

    Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A

    2014-12-01

    High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves and the majority of the apparent leaf motion was attributed to beam spot displacements between irradiations. The MLC QC test was performed 193 and 162 times over the monitoring period for the studied units and recalibration had to be repeated up to three times on one of these units. For both units, rate of MLC interlocks was moderately associated with MLC servicing events. The strongest association with the MLC performance was observed between the MLC servicing events and the total number of out-of-control leaves. The average elapsed time for which the number of out-of-specification or out-of-control leaves was within a given performance threshold was computed and used to assess adequacy of MLC test frequency. A MLC performance monitoring system has been developed and implemented to acquire high-quality QC data at high frequency. This is enabled by the relatively short acquisition time for the images and automatic image analysis. The monitoring system was also used to record and track the rate of MLC-related interlocks and servicing events. MLC performances for two commercially available MLC models have been assessed and the results support monthly test frequency for widely accepted ± 1 mm specifications. Higher QC test frequency is however required to maintain tighter specification and in-control behavior.
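
    The statistical-process-control step can be illustrated with the standard individuals-chart limits (center line plus or minus 2.66 times the mean moving range) applied to one leaf's position errors; the error values below are invented, and only the ± 1 mm specification check mirrors a figure quoted in the abstract.

      # Individuals control-chart limits for one leaf's position error, using the
      # standard moving-range estimate (center line +/- 2.66 x mean moving range).
      import numpy as np

      def individuals_limits(x):
          x = np.asarray(x, float)
          mr_bar = np.abs(np.diff(x)).mean()         # mean moving range between successive tests
          return x.mean() - 2.66 * mr_bar, x.mean(), x.mean() + 2.66 * mr_bar

      leaf_error_mm = [0.05, 0.02, -0.01, 0.04, 0.00, 0.03, -0.02, 0.06]
      lcl, cl, ucl = individuals_limits(leaf_error_mm)
      out_of_control = [e for e in leaf_error_mm if not lcl <= e <= ucl]
      out_of_spec = [e for e in leaf_error_mm if abs(e) > 1.0]    # +/- 1 mm specification
      print(f"LCL={lcl:.3f} CL={cl:.3f} UCL={ucl:.3f}", out_of_control, out_of_spec)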

  5. Machine vision for digital microfluidics

    NASA Astrophysics Data System (ADS)

    Shin, Yong-Jun; Lee, Jeong-Bong

    2010-01-01

    Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.

  6. Multiple-scanning-probe tunneling microscope with nanoscale positional recognition function.

    PubMed

    Higuchi, Seiji; Kuramochi, Hiromi; Laurent, Olivier; Komatsubara, Takashi; Machida, Shinichi; Aono, Masakazu; Obori, Kenichi; Nakayama, Tomonobu

    2010-07-01

    Over the past decade, multiple-scanning-probe microscope systems with independently controlled probes have been developed for nanoscale electrical measurements. We developed a quadruple-scanning-probe tunneling microscope (QSPTM) that can determine and control the probe positions through scanning-probe imaging. The difficulty of operating multiple probes with submicrometer precision increases drastically with the number of probes. To solve problems such as determining the relative positions of the probes and avoiding contact between the probes, we adopted sample-scanning methods to obtain four images simultaneously and developed an original control system for QSPTM operation with a function of automatic positional recognition. These improvements make the QSPTM a more practical and useful instrument, since four images can now be reliably produced and, consequently, the positioning of the four probes becomes easier owing to the reduced chance of accidental contact between the probes.

  7. Study of nanometer-level precise phase-shift system used in electronic speckle shearography and phase-shift pattern interferometry

    NASA Astrophysics Data System (ADS)

    Jing, Chao; Liu, Zhongling; Zhou, Ge; Zhang, Yimo

    2011-11-01

    A nanometer-level precise phase-shift system is designed to realize phase-shift interferometry in electronic speckle shearography pattern interferometry. A PZT is used as the driving component of the phase-shift system, and a flexure-hinge translation component is developed to realize friction-free, clearance-free micro-displacement. A closed-loop control system is designed for high-precision micro-displacement, in which an embedded digital control system executes the control algorithm and a capacitive sensor provides real-time feedback measurement of the micro-displacement. The dynamic model and control model of the nanometer-level precise phase-shift system are analyzed, and high-precision micro-displacement is realized with a digital PID control algorithm on this basis. Experiments show that the positioning precision of the precise phase-shift system is better than 2 nm for step displacement signals and better than 5 nm for continuous displacement signals, which satisfies the requirements of electronic speckle shearography and phase-shift pattern interferometry. The fringe images from four-step phase-shift interferometry and the final phase-distribution image, correlated with the deformation of the objects, are presented to demonstrate the validity of the nanometer-level precise phase-shift system.
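
    The four-step phase-shifting relation that such a system relies on is the standard one: with interferograms I1..I4 captured at phase shifts of 0, π/2, π, and 3π/2, the wrapped phase is atan2(I4 - I2, I1 - I3). The short check below verifies the identity on synthetic fringes; it is a generic illustration, not the authors' processing code.

      # Generic four-step phase-shifting identity check on synthetic fringes:
      # with I_k = A + B*cos(phi + k*pi/2), the wrapped phase is atan2(I4-I2, I1-I3).
      import numpy as np

      def four_step_phase(i1, i2, i3, i4):
          return np.arctan2(i4 - i2, i1 - i3)

      phi = np.linspace(-np.pi, np.pi, 256)[None, :] * np.ones((64, 1))   # known phase ramp
      frames = [1.0 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
      print(np.allclose(four_step_phase(*frames), phi, atol=1e-6))        # True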

  8. Improved Performance Characteristics For Indium Antimonide Photovoltaic Detector Arrays Using A FET-Switched Multiplexing Technique

    NASA Astrophysics Data System (ADS)

    Ma, Yung-Lung; Ma, Chialo

    1987-03-01

    In this paper an Acoustic Imaging Recognition System (AIRS) is introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up-table method in this case; the method saves a great deal of calculation time and is practicable. The Acoustic Imaging Recognition System (AIRS) consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit, and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically separately; the purpose of the rotation is to find the maximum reflection intensity area. From the distance, angles, and intensity of the target we can decide the characteristics of the target; all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit, we utilize the correlation method to overcome the limitation of short ultrasonic bursts, because the correlation system can transmit large time-bandwidth signals and obtain their resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and converted into digital data by the μ-law coding method, and this data, together with the delay time T and the angle information θH, θV, is sent to the main control unit for further analysis. For the recognition process in this paper, we use a dynamic look-up-table method: first we set up several recognition pattern tables, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the main control unit; the main control unit also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, we use a narrow-beam transducer with an input voltage of 50 V peak-to-peak. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image lab of the robot memory. Index terms: acoustic system, supersonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree, quad approach.

  9. Design and simulation of a sensor for heliostat field closed loop control

    NASA Astrophysics Data System (ADS)

    Collins, Mike; Potter, Daniel; Burton, Alex

    2017-06-01

    Significant research has been completed in pursuit of capital cost reductions for heliostats [1],[2]. The camera array closed-loop control concept has the potential to radically alter the way heliostats are controlled and installed by replacing high-quality open-loop targeting systems with low-quality targeting devices that rely on measurement of image position to remove tracking errors during operation. Although the system could be used for any heliostat size, it particularly benefits small heliostats by reducing actuation costs, enabling large numbers of heliostats to be calibrated simultaneously, and enabling calibration of heliostats that produce low irradiance (similar to or less than ambient light) on Lambertian calibration targets, such as small heliostats that are far from the tower. A simulation method for the camera array has been designed and verified experimentally. The simulation tool demonstrates that closed-loop calibration or control is possible using this device.

  10. Preliminary Electrical Designs for CTEX and AFIT Satellite Ground Station

    DTIC Science & Technology

    2010-03-01

    [The record text for this entry is extraction residue from tables and figures: a listing of S-340 high-speed piezo tip/tilt platform models (aluminum, Invar, and Zerodur glass mirror options) and a CTEx imaging system block diagram (attitude determination system, telescope assembly, DCCU camera, motor/encoder assembly, FSM and control electronics, dwell mirror). The recoverable prose notes a design developed by RC Optics that uses internal steerable mirrors to point the optics without slewing the entire instrument.]

  11. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal-computer-based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tape deck, a Hi8 tape deck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is repeated until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
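
    A hedged sketch of the frame-by-frame tracking loop described above is given below: grab a frame, search the neighborhood of the last known position, update the position, and log the coordinates. A simple intensity-threshold centroid stands in for the system's selectable tracking methods, and the file names and start position are illustrative.

      # Frame-by-frame tracking loop sketch: a bright-object centroid is found in a
      # window around the last known position and appended to a coordinate log.
      import cv2
      import numpy as np

      def track_to_file(video_path, out_path, x0, y0, win=40, thresh=128):
          cap = cv2.VideoCapture(video_path)
          x, y = x0, y0
          with open(out_path, "w") as log:
              frame_no = 0
              while True:
                  ok, frame = cap.read()
                  if not ok:
                      break
                  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                  top, left = max(y - win, 0), max(x - win, 0)
                  roi = gray[top:y + win, left:x + win]
                  ys, xs = np.nonzero(roi > thresh)          # bright pixels near the last position
                  if len(xs):
                      x, y = left + int(xs.mean()), top + int(ys.mean())
                  log.write(f"{frame_no},{x},{y}\n")
                  frame_no += 1

      # track_to_file("flame_front.avi", "coords.csv", x0=160, y0=120)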

  12. Simultaneous dual-band radar development

    NASA Technical Reports Server (NTRS)

    Liskow, C. L.

    1974-01-01

    Efforts to design and construct an airborne imaging radar operating simultaneously at L band and X band with an all-inertial navigation system in order to form a dual-band radar system are described. The areas of development include duplex transmitters, receivers, and recorders, a control module, motion compensation for both bands, and adaptation of a commercial inertial navigation system. Installation of the system in the aircraft and flight tests are described. Circuit diagrams, performance figures, and some radar images are presented.

  13. A fiber-compatible spectrally encoded imaging system using a 45° tilted fiber grating

    NASA Astrophysics Data System (ADS)

    Wang, Guoqing; Wang, Chao; Yan, Zhijun; Zhang, Lin

    2016-04-01

    We propose and demonstrate, for the first time to the best of our knowledge, the use of a 45° tilted fiber grating (TFG) as an in-fiber lateral diffraction element in an efficient and fiber-compatible spectrally encoded imaging (SEI) system. Under proper polarization control, the TFG has significantly enhanced diffraction efficiency (93.5%) due to strong tilted reflection. Our conceptually new fiber-optics-based design eliminates the need for bulky and lossy free-space diffraction gratings, significantly reduces the volume and cost of the imaging system, improves energy efficiency, and increases system stability. As a proof-of-principle experiment, we use the proposed system to perform one-dimensional (1D) line-scan imaging of a custom-designed three-slot sample, and the results show that the constructed image matches well with the actual sample. The angular dispersion of the 45° TFG is measured to be 0.054°/nm and the lateral resolution of the SEI system is measured to be 28 μm in our experiment.

  14. Traffic Monitor

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.

  15. Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch

    PubMed Central

    Ruggeri, Marco; Uhlhorn, Stephen R.; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie

    2012-01-01

    Abstract: An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time. PMID:22808424

  16. Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch.

    PubMed

    Ruggeri, Marco; Uhlhorn, Stephen R; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie

    2012-07-01

    An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time.

  17. Compact Microscope Imaging System Developed

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2001-01-01

    The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. The CMIS can be used in situ with a minimum amount of user intervention. This system, which was developed at the NASA Glenn Research Center, can scan, find areas of interest, focus, and acquire images automatically. Large numbers of multiple cell experiments require microscopy for in situ observations; this is only feasible with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control capabilities. The software also has a user-friendly interface that can be used independently of the hardware for post-experiment analysis. CMIS has potential commercial uses in the automated online inspection of precision parts, medical imaging, security industry (examination of currency in automated teller machines and fingerprint identification in secure entry locks), environmental industry (automated examination of soil/water samples), biomedical field (automated blood/cell analysis), and microscopy community. CMIS will improve research in several ways: It will expand the capabilities of MSD experiments utilizing microscope technology. It may be used in lunar and Martian experiments (Rover Robot). Because of its reduced size, it will enable experiments that were not feasible previously. It may be incorporated into existing shuttle orbiter and space station experiments, including glove-box-sized experiments as well as ground-based experiments.

  18. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  19. A new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Ma, Yayun; Han, Shaokun; Wang, Yulin; Liu, Fei; Zhai, Yu

    2018-06-01

    One of the most important goals of research on three-dimensional nonscanning laser imaging systems is the improvement of the illumination system. In this paper, a new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array is proposed. This array is obtained using a fiber array connected to a laser array with each unit laser having independent control circuits. This system uses a point-to-point imaging process, which is realized using the exact corresponding optical relationship between the point-light-source array and a linear-mode avalanche photodiode array detector. The complete working process of this system is explained in detail, and the mathematical model of this system containing four equations is established. A simulated contrast experiment and two real contrast experiments which use the simplified setup without a laser array are performed. The final results demonstrate that unlike a conventional three-dimensional nonscanning laser imaging system, the proposed system meets all the requirements of an eligible illumination system. Finally, the imaging performance of this system is analyzed under defocusing situations, and analytical results show that the system has good defocusing robustness and can be easily adjusted in real applications.

  20. Remote imaging laser-induced breakdown spectroscopy and laser-induced fluorescence spectroscopy using nanosecond pulses from a mobile lidar system.

    PubMed

    Grönlund, Rasmus; Lundqvist, Mats; Svanberg, Sune

    2006-08-01

    A mobile lidar system was used in remote imaging laser-induced breakdown spectroscopy (LIBS) and laser-induced fluorescence (LIF) experiments. Also, computer-controlled remote ablation of a chosen area was demonstrated, relevant to cleaning of cultural heritage items. Nanosecond frequency-tripled Nd:YAG laser pulses at 355 nm were employed in experiments with a stand-off distance of 60 meters using pulse energies of up to 170 mJ. By coaxial transmission and common folding of the transmission and reception optical paths using a large computer-controlled mirror, full elemental imaging capability was achieved on composite targets. Different spectral identification algorithms were compared in producing thematic data based on plasma or fluorescence light.
