NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system that provides a framework, or standard, for implementing image data processing applications; simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation; streamlines operation of the image processing facility; and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Strack, Ruediger
1992-04-01
apART reflects the structure of an open, distributed environment. In line with the general trend in imaging, network-capable, general-purpose workstations with open-system image communication and image input capabilities are used. Several heterogeneous components such as CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface in which devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are then processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations during interaction is enhanced significantly. An object-oriented database system (APRIL), which can be browsed, is integrated into the system, and attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet and a magneto-optical disc, as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support for different image interchange formats (GIF, TIFF, IIF, etc.); and consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.
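The audit-trail mechanism described above (recording interactive operations on a low-resolution derivative and replaying them off-line on the full-resolution original) can be illustrated with a minimal sketch. This is a generic illustration, not apART's implementation; the operation names, the coordinate scaling and the array-style image access are assumptions.

```python
# Minimal sketch of an audit-trail mechanism: operations are recorded while the
# user works on a low-resolution derivative, then replayed on the original image.
# Operation names and the scale-aware crop are illustrative assumptions.

class AuditTrail:
    def __init__(self):
        self.ops = []          # recorded (name, kwargs) pairs

    def record(self, name, **kwargs):
        self.ops.append((name, kwargs))

    def replay(self, image, scale=1.0):
        """Apply the recorded operations to `image`.
        `scale` maps coordinates chosen on the derivative to the original."""
        for name, kwargs in self.ops:
            if name == "crop":
                x0, y0, x1, y1 = (int(v * scale) for v in kwargs["box"])
                image = image[y0:y1, x0:x1]
            elif name == "invert":
                image = image.max() - image
        return image

# Usage: interact on a 1/8-resolution derivative, replay on the original later.
trail = AuditTrail()
trail.record("crop", box=(10, 10, 100, 80))
trail.record("invert")
# full_res_result = trail.replay(original_image, scale=8.0)
```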
Pulsed laser linescanner for a backscatter absorption gas imaging system
Kulp, Thomas J.; Reichardt, Thomas A.; Schmitt, Randal L.; Bambha, Ray P.
2004-02-10
An active (laser-illuminated) imaging system is described that is suitable for use in backscatter absorption gas imaging (BAGI). A BAGI imager operates by imaging a scene while it is illuminated with radiation that is absorbed by the gas to be detected. Gases become "visible" in the image when they attenuate the illumination, creating a shadow in the image. This disclosure describes a BAGI imager that operates in a linescanned manner using a high-repetition-rate pulsed laser as its illumination source. The format of this system allows differential imaging, in which the scene is illuminated with light at two or more wavelengths--one or more absorbed by the gas and one or more not absorbed. The system is designed to accomplish imaging in a manner that is insensitive to motion of the camera, so that it can be held in the hand of an operator or operated from a moving vehicle.
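The differential-imaging idea (comparing backscatter collected at an absorbed and a non-absorbed wavelength) can be illustrated with a simple log-ratio computation on two co-registered frames. A minimal sketch assuming NumPy-array images; the detection threshold is an illustrative assumption, not the disclosed system's processing.

```python
import numpy as np

def differential_absorption_image(i_on, i_off, eps=1e-6):
    """Log-ratio of backscatter images taken at the absorbed ('on') and
    non-absorbed ('off') wavelengths; gas absorption shows up as positive values."""
    i_on = np.asarray(i_on, dtype=float)
    i_off = np.asarray(i_off, dtype=float)
    return -np.log((i_on + eps) / (i_off + eps))

# Example: a synthetic gas plume attenuating 30% of the 'on' illumination.
i_off = np.full((64, 64), 100.0)
i_on = i_off.copy()
i_on[20:40, 20:40] *= 0.7
gas_map = differential_absorption_image(i_on, i_off)
mask = gas_map > 0.1        # crude detection threshold (assumption)
```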
Development of an image operation system with a motion sensor in dental radiology.
Sato, Mitsuru; Ogura, Toshihiro; Yasumoto, Yoshiaki; Kadowaki, Yuta; Hayashi, Norio; Doi, Kunio
2015-07-01
During examinations and/or treatment, a dentist in the examination room needs to view images on a suitable display system. However, dentists cannot operate the image display system by hand, because they always wear gloves to keep their hands away from unsanitized materials. Therefore, we developed a new image operating system that uses a motion sensor. We used the Leap Motion sensor to read the hand movements of a dentist, and programmed the system in C++ to enable various operations of the display system, i.e., click, double click, drag, and drop. Thus, dentists with their gloves on in the examination room can control dental and panoramic images on the image display system intuitively and quickly with hand movements alone. We compared the time required with the conventional method using a mouse and with the new method using finger operation. The average operation time with the finger method was significantly shorter than that with the mouse method. This motion sensor method, with appropriate training of the finger movements, can provide better operating performance than the conventional mouse method.
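A gesture-to-pointer mapping of the kind described (turning tracked hand motion into click, double click, drag and drop events) can be sketched as a small state machine. The tracker interface, thresholds and event names below are hypothetical; this is not the authors' C++ implementation nor the Leap Motion SDK API.

```python
# Hypothetical sketch of mapping tracked fingertip states to pointer actions.
# Thresholds, event names and the tracker interface are illustrative assumptions.

class GestureMapper:
    def __init__(self, pinch_threshold=0.8, double_click_window=0.4):
        self.pinch_threshold = pinch_threshold
        self.double_click_window = double_click_window
        self.dragging = False
        self.last_click_time = -1.0

    def update(self, t, x, y, pinch_strength):
        """Return an (action, x, y) tuple for one tracker frame."""
        pinched = pinch_strength >= self.pinch_threshold
        if pinched and not self.dragging:
            self.dragging = True
            if t - self.last_click_time <= self.double_click_window:
                action = "double_click"
            else:
                action = "click"
            self.last_click_time = t
        elif pinched and self.dragging:
            action = "drag"
        elif not pinched and self.dragging:
            self.dragging = False
            action = "drop"
        else:
            action = "move"
        return action, x, y
```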
A simplification of the fractional Hartley transform applied to image security system in phase
NASA Astrophysics Data System (ADS)
Jimenez, Carlos J.; Vilardy, Juan M.; Perez, Ronal
2017-01-01
In this work we develop a new encryption system for images encoded in phase, using the fractional Hartley transform (FrHT), truncation operations and random phase masks (RPMs). We introduce a simplification of the FrHT with the purpose of computing this transform in an efficient and fast way. The security of the encryption system is increased by using nonlinear operations, such as phase encoding and the truncation operations. The image to encrypt (the original image) is encoded in phase, and the truncation operations applied in the encryption-decryption system are amplitude and phase truncations. The encrypted image is protected by six keys: the two fractional orders of the FrHTs, the two RPMs and the two pseudorandom code images generated by the amplitude and phase truncation operations. All of these keys have to be correct for proper recovery of the original image in the decryption system. We present digital results that confirm our approach.
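The basic building blocks named in the abstract (phase encoding, random phase masks, and amplitude/phase truncation) can be sketched as follows. The transform used here is the ordinary discrete Hartley transform computed via the FFT, standing in for the simplified fractional Hartley transform of the paper; the whole flow is an illustrative toy, not the proposed cryptosystem.

```python
import numpy as np

rng = np.random.default_rng(0)

def hartley2(x):
    """Ordinary 2-D discrete Hartley transform via the FFT (stand-in for the
    fractional Hartley transform of the paper)."""
    F = np.fft.fft2(x)
    return F.real - F.imag

def phase_encode(img):
    """Encode a normalized image (values in [0, 1]) as a pure phase function."""
    return np.exp(1j * np.pi * img)

def random_phase_mask(shape):
    return np.exp(2j * np.pi * rng.random(shape))

def amplitude_truncation(z):
    """Discard the amplitude, keep the phase."""
    return np.exp(1j * np.angle(z))

def phase_truncation(z):
    """Discard the phase, keep the amplitude."""
    return np.abs(z)

# One illustrative encryption step: phase-encode, modulate with an RPM,
# transform, then split into an amplitude-only cipher and a phase-only key.
img = rng.random((32, 32))
field = phase_encode(img) * random_phase_mask(img.shape)
spectrum = hartley2(field)
cipher = phase_truncation(spectrum)      # transmitted image (amplitude only)
key = amplitude_truncation(spectrum)     # pseudorandom phase code kept for decryption
```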
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of considerable significance for image reading and diagnosis. As part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on series images. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then describes the design and implementation of a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and the calculation operations were separated into two modules. The results showed that the proposed system can provide three-dimensional post-processing services of medical images to multiple clients at the same time, which meets the demand for accessing three-dimensional post-processing operations on volume data on the web.
Imaging System for Vaginal Surgery.
Taylor, G Bernard; Myers, Erinn M
2015-12-01
The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view in real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image on high-definition monitors in the operating room for the surgeon and staff to view the procedures simultaneously. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.
Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA
2008-10-14
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns, either directly producing images of the interior of the boiler or feeding signals to a data processing system that provides information enabling the distributed control system by which the boilers are operated to run them more efficiently. The data processing system includes an image pre-processing circuit in which the 2-D image formed by the video data input is captured, and which includes a low-pass filter for noise filtering of the video input. It also includes an image compensation system for array compensation, to correct for pixel variation, dead cells, and the like, and for correcting geometric distortion. An image segmentation module receives the cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also performs thresholding/clustering on gray scale/texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches the derived regions to a 3-D model of the boiler. It derives a 3-D structure of the deposition on the pendant tubes in the boiler and provides the information about deposits to the plant distributed control system for more efficient operation of the plant's pendant tube cleaning and operating systems.
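The segmentation stage described (low-pass noise filtering, gray-scale thresholding into background, pendant tubes and deposition, morphological smoothing, and region identification by connected components) follows a standard pipeline that can be sketched with NumPy/SciPy. The thresholds and structuring element are assumptions; this is a generic illustration, not the patented implementation.

```python
import numpy as np
from scipy import ndimage

def segment_deposits(frame, deposit_threshold=0.7, tube_threshold=0.4):
    """Toy version of the segmentation stage: low-pass filter, threshold into
    background / pendant tubes / deposition, smooth morphologically, and label
    connected regions. Thresholds are illustrative assumptions."""
    # Low-pass (noise) filtering of the 2-D infrared frame.
    smoothed = ndimage.gaussian_filter(np.asarray(frame, dtype=float), sigma=2)

    # Simple gray-scale thresholding into three classes.
    labels = np.zeros(smoothed.shape, dtype=np.uint8)   # 0 = background
    labels[smoothed > tube_threshold] = 1               # 1 = pendant tubes
    labels[smoothed > deposit_threshold] = 2            # 2 = deposition

    # Morphological smoothing of the deposition class.
    deposits = ndimage.binary_opening(labels == 2, structure=np.ones((3, 3)))

    # Identify individual deposits as connected components.
    regions, n_regions = ndimage.label(deposits)
    return labels, regions, n_regions
```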
[The use of an opect optic system in neurosurgical practice].
Kalinovskiy, A V; Rzaev, D A; Yoshimitsu, K
2018-01-01
Modern neurosurgical practice is impossible without access to various information sources. The use of MRI and MSCT data during surgery is an integral part of the neurosurgeon's daily practice. Devices capable of managing an image viewer system without direct contact with equipment simplify work in the operating room. The aims were to test operation of a non-contact MRI and MSCT image viewer system in the operating room and to evaluate the system's effectiveness. An Opect non-contact image management system, developed at the Tokyo Women's Medical University, was installed in one of the operating rooms of the Novosibirsk Federal Center of Neurosurgery in 2014. In 2015, the Opect system was used by operating surgeons in 73 surgeries performed in that operating room. The system's effectiveness was analyzed based on a survey of the surgeons. The non-contact image viewer system proved easy for the personnel to learn and operate, easy to manage, and convenient for presenting visual information during surgery. Application of the Opect system simplifies work with neuroimaging data during surgery; the surgeon can independently view series of relevant MRI and MSCT scans without any assistance.
Optical coherence tomography using the Niris system in otolaryngology
NASA Astrophysics Data System (ADS)
Rubinstein, Marc; Armstrong, William B.; Djalilian, Hamid R.; Crumley, Roger L.; Kim, Jason H.; Nguyen, Quoc A.; Foulad, Allen I.; Ghasri, Pedram E.; Wong, Brian J. F.
2009-02-01
Objectives: To determine the feasibility and accuracy of the Niris Optical Coherence Tomography (OCT) system in imaging mucosal abnormalities of the head and neck. The Niris system is the first commercially available OCT device for applications outside ophthalmology. Methods: We obtained OCT images of benign, premalignant and malignant lesions throughout the head and neck using the Niris OCT imaging system (Imalux, Cleveland, OH). This imaging system has a tissue penetration depth of approximately 1-2 mm, a scanning range of 2 mm and a spatial depth resolution of approximately 10-20 μm. Imaging was performed in the outpatient setting and in the operating room using a flexible probe. Results: High-resolution cross-sectional images from the oral cavity, nasal cavity, ears and larynx showed distinct layers and structures; the mucosal layer, basal membrane and lamina propria were clearly identified. In images of pathologic lesions, disruption of the basal membrane was clearly shown. Device set-up took approximately 5 minutes and image acquisition was rapid. The system can be operated by the person performing the exam. Conclusions: The Niris system is noninvasive and easy to incorporate into the operating room and the clinic. It requires minimal set-up and only one person to operate. OCT offers high-resolution images showing the microanatomy of different sites. OCT imaging with the Niris device potentially offers an efficient, quick and reliable imaging modality for guiding surgical biopsies, intra-operative decision making, and therapeutic options for different otolaryngologic pathologies and premalignant disease.
A frameless stereotaxic operating microscope for neurosurgery.
Friets, E M; Strohbehn, J W; Hatch, J F; Roberts, D W
1989-06-01
A new system, which we call the frameless stereotaxic operating microscope, is discussed. Its purpose is to display CT or other image data in the operating microscope in the correct scale, orientation, and position without the use of a stereotaxic frame. A nonimaging ultrasonic rangefinder allows the positions of the operating microscope and of the patient to be determined. Discrete fiducial points on the patient's external anatomy are located in both image space and operating room space, linking the image data and the operating room. Physician-selected image information, e.g., tumor contours or guidance to predetermined targets, is projected through the optics of the operating microscope using a miniature cathode ray tube and a beam splitter. The projected images, reconstructed from image data to match the focal plane of the operating microscope, are superposed on the surgical field. The algorithms on which the system is based are described, and the sources and effects of errors are discussed. The system's performance is simulated, providing an estimate of accuracy, and two phantoms are used to measure accuracy experimentally. Clinical results and observations are given.
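Linking image space and operating-room space from matched fiducial points is, in its simplest rigid form, the classical least-squares point registration problem solvable with an SVD (the Kabsch/Horn method). The sketch below shows only that step under the assumption of corresponding 3-D point lists; the authors' full system additionally handles microscope tracking and image projection.

```python
import numpy as np

def rigid_registration(image_pts, room_pts):
    """Least-squares rigid transform (R, t) mapping image-space fiducials to
    operating-room-space fiducials, via the standard SVD (Kabsch/Horn) method.
    Both inputs are (N, 3) arrays of corresponding points."""
    P = np.asarray(image_pts, dtype=float)
    Q = np.asarray(room_pts, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

def fiducial_registration_error(R, t, image_pts, room_pts):
    """RMS residual of the mapped fiducials, a common accuracy estimate."""
    mapped = (np.asarray(image_pts) @ R.T) + t
    return np.sqrt(np.mean(np.sum((mapped - np.asarray(room_pts)) ** 2, axis=1)))
```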
Operation and performance of the Mars Exploration Rover imaging system on the Martian surface
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken
2005-01-01
The Imaging System on the Mars Exploration Rovers has successfully operated on the surface of Mars for over one Earth year. An overview of the surface imaging activities is provided, along with a summary of the image data acquired to date.
NASA Technical Reports Server (NTRS)
Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)
2010-01-01
A multi-agent autonomous system for exploration of hazardous or inaccessible locations. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft using simple movement commands to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area that is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system to provide additional image information about the operational area and provide operational and location commands to the tracking and command system.
Dagnino, Giulio; Georgilas, Ioannis; Morad, Samir; Gibbons, Peter; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja
2017-08-01
Joint fractures must be accurately reduced while minimising soft tissue damage in order to avoid negative surgical outcomes. In this regard, we have developed the RAFS surgical system, which allows the percutaneous reduction of intra-articular fractures and provides intra-operative real-time 3D image guidance to the surgeon. Earlier experiments showed the effectiveness of the RAFS system on phantoms, but also revealed key issues that precluded its use in a clinical application. This work proposes a redesign of the RAFS navigation system that overcomes the earlier version's issues, aiming to move the RAFS system into a surgical environment. The navigation system is improved through an image registration framework allowing the intra-operative registration between pre-operative CT images and intra-operative fluoroscopic images of a fractured bone using a custom-made fiducial marker. The objective of the registration is to estimate the relative pose between a bone fragment and an orthopaedic manipulation pin inserted into it intra-operatively. The actual pose of the bone fragment can then be updated in real time using an optical tracker, enabling the image guidance. Experiments on phantoms and cadavers demonstrated the accuracy and reliability of the registration framework, showing a reduction accuracy (sTRE) of about [Formula: see text] (phantom) and [Formula: see text] (cadavers). Four distal femur fractures were successfully reduced in cadaveric specimens using the improved navigation system and the RAFS system following the new clinical workflow (reduction error [Formula: see text], [Formula: see text]). The experiments showed the feasibility of the image registration framework, which was successfully integrated into the navigation system, allowing the use of the RAFS system in a realistic surgical application.
Biomedical imaging with THz waves
NASA Astrophysics Data System (ADS)
Nguyen, Andrew
2010-03-01
We discuss biomedical imaging using radio waves operating in the terahertz (THz) range between 300 GHz and 3 THz. In particular, we present concepts for two THz imaging systems. One system employs a single antenna, transmitter and receiver operating over multiple THz frequencies simultaneously for sensing and imaging small areas of the human body or biological samples. The other system consists of multiple antennas, a transmitter, and multiple receivers operating over multiple THz frequencies, capable of sensing and imaging the whole body or large biological samples simultaneously. Using THz waves for biomedical imaging promises unique and substantial medical benefits, including extremely small medical devices, extraordinarily fine spatial resolution, and excellent contrast between images of diseased and healthy tissues. THz imaging is extremely attractive for detection of cancer in the early stages, sensing and imaging of tissues near the skin, and study of disease and its growth over time.
Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas
2006-06-01
One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.
Image-guided surgery and therapy: current status and future directions
NASA Astrophysics Data System (ADS)
Peters, Terence M.
2001-05-01
Image-guided surgery and therapy is assuming an increasingly important role, particularly considering the current emphasis on minimally-invasive surgical procedures. Volumetric CT and MR images have been used now for some time in conjunction with stereotactic frames, to guide many neurosurgical procedures. With the development of systems that permit surgical instruments to be tracked in space, image-guided surgery now includes the use of frame-less procedures, and the application of the technology has spread beyond neurosurgery to include orthopedic applications and therapy of various soft-tissue organs such as the breast, prostate and heart. Since tracking systems allow image-guided surgery to be undertaken without frames, a great deal of effort has been spent on image-to-image and image-to-patient registration techniques, and upon the means of combining real-time intra-operative images with images acquired pre-operatively. As image-guided surgery systems have become increasingly sophisticated, the greatest challenges to their successful adoption in the operating room of the future relate to the interface between the user and the system. To date, little effort has been expended to ensure that the human factors issues relating to the use of such equipment in the operating room have been adequately addressed. Such systems will only be employed routinely in the OR when they are designed to be intuitive, unobtrusive, and provide simple access to the source of the images.
Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)
NASA Technical Reports Server (NTRS)
Wherry, D. B.
1981-01-01
The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.
Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S
2012-02-23
We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded version of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling on each frame.
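The pixel-independent operations listed (flat-field correction, temporal filtering, image subtraction) are element-wise arithmetic, which is why they map so well onto GPU threads. The NumPy sketch below shows the arithmetic itself as a generic illustration; it is not the CAPIDS GPU code, and the filter constant is an assumption.

```python
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    """Element-wise flat-field correction: every pixel is processed independently,
    which is why the operation parallelizes trivially on a GPU."""
    gain = flat - dark
    return (raw - dark) * np.mean(gain) / (gain + eps)

def temporal_filter(current, previous, alpha=0.25):
    """Recursive (exponential) temporal filter applied per pixel."""
    return alpha * current + (1.0 - alpha) * previous

def subtract_mask(frame, mask):
    """Image subtraction, e.g. against a DSA/roadmap mask."""
    return frame - mask
```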
Satellite image collection optimization
NASA Astrophysics Data System (ADS)
Martin, William
2002-09-01
Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical both for satisfying customer orders and for building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time-dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of what-if adjustments to an image collection plan. Used for both long-range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite.
An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information, including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
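A single fused frame that lets the infrared channel enhance rather than obscure the visible image can, in its simplest form, be produced by a locally weighted blend. The sketch below is one such blend with assumed weights; it is not the fusion algorithm of the cited work.

```python
import numpy as np

def fuse_visible_infrared(visible, infrared, base_weight=0.7):
    """Blend co-registered, normalized (0..1) visible and infrared frames.
    The infrared contribution is boosted where it carries strong local signal,
    so it enhances rather than obscures the visible image (weights are assumptions)."""
    vis = np.asarray(visible, dtype=float)
    ir = np.asarray(infrared, dtype=float)
    ir_salience = np.clip(ir - ir.mean(), 0.0, None)
    ir_salience /= ir_salience.max() + 1e-6
    weight = base_weight - 0.4 * ir_salience          # lower weight -> more IR locally
    return np.clip(weight * vis + (1.0 - weight) * ir, 0.0, 1.0)
```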
Performance of the dark energy camera liquid nitrogen cooling system
NASA Astrophysics Data System (ADS)
Cease, H.; Alvarez, M.; Alvarez, R.; Bonati, M.; Derylo, G.; Estrada, J.; Flaugher, B.; Flores, R.; Lathrop, A.; Munoz, F.; Schmidt, R.; Schmitt, R. L.; Schultz, K.; Kuhlmann, S.; Zhao, A.
2014-01-01
The Dark Energy Camera (the imager and its cooling system) was installed on the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory in Chile in September 2012. The imager cooling system is a two-phase closed-loop LN2 cryogenic cooling system, with the cryogenic circulation equipment located off the telescope. Vacuum-jacketed liquid nitrogen transfer lines run up the outside of the telescope truss tubes to the imager inside the prime focus cage. The design of the cooling system, along with commissioning experiences and initial cooling system performance, is described. The LN2 cooling system with the DES imager was initially operated at Fermilab for testing, then shipped and tested in the Blanco Coudé room; the imager now operates inside the prime focus cage. It is shown that the cooling system sufficiently cools the imager in closed-loop mode and can operate for extended periods without maintenance or LN2 fills.
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array, based upon the discrete cosine transform statistics, for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
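The trade-off between the distortion array and the rate array can be illustrated with a simplified Lagrangian selection: for each DCT coefficient position, pick the quantizer step minimizing distortion plus λ times an estimated rate, both computed from gathered coefficient statistics. This is a simplification of the patent's dynamic-programming optimization; the rate model and λ are assumptions.

```python
import numpy as np

def choose_quantizers(dct_samples, q_candidates=range(1, 64), lam=0.05):
    """Pick one quantization step per DCT coefficient position.

    dct_samples: array of shape (n_blocks, 8, 8) with DCT coefficients gathered
    from the input image (the preprocessor's 'statistics'). Returns an 8x8
    quantization table minimizing distortion + lam * rate, a Lagrangian
    simplification of the rate-distortion optimization."""
    table = np.zeros((8, 8), dtype=int)
    for i in range(8):
        for j in range(8):
            c = dct_samples[:, i, j]
            best_cost, best_q = np.inf, 1
            for q in q_candidates:
                levels = np.round(c / q)
                distortion = np.mean((c - q * levels) ** 2)
                rate = np.mean(np.log2(2.0 * np.abs(levels) + 1.0))  # crude bit estimate
                cost = distortion + lam * rate
                if cost < best_cost:
                    best_cost, best_q = cost, q
            table[i, j] = best_q
    return table
```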
Elliott, Jonathan T; Dsouza, Alisha V; Marra, Kayla; Pogue, Brian W; Roberts, David W; Paulsen, Keith D
2016-09-01
Fluorescence guided surgery has the potential to positively impact surgical oncology; current operating microscopes and stand-alone imaging systems are too insensitive or too cumbersome to maximally take advantage of new tumor-specific agents developed through the microdose pathway. To this end, a custom-built illumination and imaging module enabling picomolar-sensitive near-infrared fluorescence imaging on a commercial operating microscope is described. The limits of detection and system specifications are characterized, and in vivo efficacy of the system in detecting ABY-029 is evaluated in a rat orthotopic glioma model following microdose injections, showing the suitability of the device for microdose phase 0 clinical trials.
NASA Astrophysics Data System (ADS)
Newswander, T.; Riesland, David W.; Miles, Duane; Reinhart, Lennon
2017-09-01
For space optical systems that image extended scenes such as earth-viewing systems, modulation transfer function (MTF) test data is directly applicable to system optical resolution. For many missions, it is the most direct metric for establishing the best focus of the instrument. Additionally, MTF test products can be combined to predict overall imaging performance. For fixed focus instruments, finding the best focus during ground testing is critical to achieving good imaging performance. The ground testing should account for the full-imaging system, operational parameters, and operational environment. Testing the full-imaging system removes uncertainty caused by breaking configurations and the combination of multiple subassembly test results. For earth viewing, the imaging system needs to be tested at infinite conjugate. Operational environment test conditions should include temperature and vacuum. Optical MTF testing in the presence of operational vibration and gravity release is less straightforward and may not be possible on the ground. Gravity effects are mitigated by testing in multiple orientations. Many space telescope systems are designed and built to have optimum performance in a gravity-free environment. These systems can have imaging performance that is dominated by aberration including astigmatism. This paper discusses how the slanted edge MTF test is applied to determine the best focus of a space optical telescope in ground testing accounting for gravity sag effects. Actual optical system test results and conclusions are presented.
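A simplified edge-based MTF estimate computes the edge spread function across a near-vertical edge, differentiates it into the line spread function, and takes the normalized magnitude of its Fourier transform. The sketch below illustrates that chain; it omits the sub-pixel projection of the full ISO slanted-edge procedure and is not the test flow described in the paper.

```python
import numpy as np

def mtf_from_edge(roi):
    """Simplified MTF estimate from an edge region of interest (2-D array with a
    near-vertical dark-to-bright edge). Omits the sub-pixel projection step of the
    full slanted-edge method."""
    esf = np.mean(np.asarray(roi, dtype=float), axis=0)      # edge spread function
    lsf = np.gradient(esf)                                    # line spread function
    lsf *= np.hanning(lsf.size)                               # reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0] + 1e-12                                     # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)                  # cycles/pixel
    return freqs, mtf
```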
The Limited Duty/Chief Warrant Officer Professional Guidebook
1985-01-01
They plan and manage the operation of imaging commands and activities, combat camera groups, and aerial reconnaissance imaging, and supervise picture and video systems used in aerial, surface, and subsurface imaging.
Analysis of Interactive Graphics Display Equipment for an Automated Photo Interpretation System.
1982-06-01
The IMAGE System provides the hardware and software for a range of graphics processor tasks. Its executive program is the DEC RSX-11M real-time operating system; the PDP 11/34 executes graphics programs concurrently under RSX-11M, and one hard copy unit serves up to four work stations.
High-speed image processing system and its micro-optics application
NASA Astrophysics Data System (ADS)
Ohba, Kohtaro; Ortega, Jesus C. P.; Tanikawa, Tamio; Tanie, Kazuo; Tajima, Kenji; Nagai, Hiroshi; Tsuji, Masataka; Yamada, Shigeru
2003-07-01
In this paper, a new application of high-speed photography, an observational system for tele-micro-operation, is proposed, combining a dynamic focusing system and a high-speed image processing system based on the "Depth From Focus (DFF)" criterion. In micro operation, such as microsurgery or DNA manipulation, the small depth of focus of the microscope makes observation difficult. For example, if the focus is on the object, the actuator cannot be seen with the microscope; if the focus is on the actuator, the object cannot be observed. In this sense, the "all-in-focus image," which keeps the texture in focus over the entire image, is useful for observing microenvironments under the microscope. It is also important to obtain the "depth map," which can show the 3D micro virtual environment in real time so that micro objects can be actuated intuitively. To realize real-time micro operation with the DFF criterion, which has to integrate several images to obtain the all-in-focus image and the depth map, an image capture and processing system of at least 240 frames per second is required. This paper first briefly reviews the depth-from-focus criterion used to achieve the all-in-focus image and the 3D microenvironment reconstruction simultaneously. After discussing the problems in our previous system, a new frame-rate system is constructed with a high-speed video camera and FPGA hardware operating at 240 frames per second. To apply this system to a real microscope, a new "ghost filtering" technique for reconstructing the all-in-focus image is proposed. Finally, micro observation experiments show the validity of the system.
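The depth-from-focus step (select, at each pixel, the frame of the focal stack with the highest local focus measure, yielding both the all-in-focus image and the depth map) can be sketched as below. The Laplacian-energy focus measure and window size are assumptions, and the real system implements this at frame rate on FPGA hardware rather than in Python.

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack):
    """stack: (n_frames, H, W) focal stack ordered by focus position.
    Returns (all_in_focus, depth_map) using a local Laplacian-energy focus measure."""
    stack = np.asarray(stack, dtype=float)
    focus = np.empty_like(stack)
    for k, frame in enumerate(stack):
        lap = ndimage.laplace(frame)
        focus[k] = ndimage.uniform_filter(lap ** 2, size=9)   # local focus energy

    depth_map = np.argmax(focus, axis=0)                      # index of best-focused frame
    rows, cols = np.indices(depth_map.shape)
    all_in_focus = stack[depth_map, rows, cols]               # pick sharpest pixel per site
    return all_in_focus, depth_map
```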
MIRIADS: miniature infrared imaging applications development system description and operation
NASA Astrophysics Data System (ADS)
Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.; Couture, Michael E.
2001-10-01
A cooperative effort between the U.S. Air Force Research Laboratory, Nova Research, Inc., the Raytheon Infrared Operations (RIO) and Optics 1, Inc. has successfully produced a miniature infrared camera system that offers significant real-time signal and image processing capabilities by virtue of its modular design. This paper will present an operational overview of the system as well as results from initial testing of the 'Modular Infrared Imaging Applications Development System' (MIRIADS) configured as a missile early-warning detection system. The MIRIADS device can operate virtually any infrared focal plane array (FPA) that currently exists. Programmable on-board logic applies user-defined processing functions to the real-time digital image data for a variety of functions. Daughterboards may be plugged onto the system to expand the digital and analog processing capabilities of the system. A unique full hemispherical infrared fisheye optical system designed and produced by Optics 1, Inc. is utilized by the MIRIADS in a missile warning application to demonstrate the flexibility of the overall system to be applied to a variety of current and future AFRL missions.
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
Analysis of the development of missile-borne IR imaging detecting technologies
NASA Astrophysics Data System (ADS)
Fan, Jinxiang; Wang, Feng
2017-10-01
Today's infrared imaging guided missiles face many challenges. With the development of target stealth, new-style IR countermeasures and penetration technologies, as well as the increasing complexity of operational environments, infrared imaging guided missiles must meet higher requirements for efficient target detection, resistance to interference and jamming, and operational adaptability in complex, dynamic operating environments. Missile-borne infrared imaging detecting systems are constrained by practical considerations such as cost, size, weight and power (SWaP), and lifecycle requirements. Future-generation infrared imaging guided missiles need to be resilient to changing operating environments and capable of doing more with fewer resources. Advanced IR imaging detecting and information exploring technologies are the key technologies that affect the future direction of IR imaging guided missiles, and research on them will support the development of more robust and efficient missile-borne infrared imaging detecting systems. Novel IR imaging technologies, such as infrared adaptive spectral imaging, are the key to effectively detecting, recognizing and tracking targets under complicated operating and countermeasure environments. Innovative techniques for exploiting the information on targets, background and countermeasures provided by the detection system are the basis for the missile to recognize targets and counter interference, jamming and countermeasures. Modular hardware and software development is the enabler for implementing multi-purpose, multi-function solutions. Uncooled IRFPA detectors and high-operating-temperature IRFPA detectors, as well as commercial-off-the-shelf (COTS) technology, will support the implementation of low-cost infrared imaging guided missiles. In this paper, the current status and features of missile-borne IR imaging detecting technologies are summarized, and the key technologies and their development trends are analyzed.
NASA Astrophysics Data System (ADS)
Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent
2014-10-01
Onera, the French aerospace lab, develops and models active imaging systems to understand the relevant physical phenomena affecting the performance of these systems. As a consequence, efforts have been made on the propagation of a pulse through the atmosphere and on target geometries and surface properties. These imaging systems must operate at night under all ambient illumination and weather conditions in order to perform strategic surveillance for various worldwide operations. We have implemented codes for 2D and 3D laser imaging systems. As we aim to image a scene in the presence of rain, snow, fog or haze, we introduce such light-scattering effects into our numerical models and compare simulated images with measurements provided by commercial laser scanners.
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian
2018-06-01
In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting a quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, a quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security, since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness and greater unpredictability than low-dimensional chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.
Image-guided thoracic surgery in the hybrid operation room.
Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro
2017-01-01
There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on the advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at the Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, which feature a dual-source, dual-energy computed tomography (CT) scanner, a robotic cone-beam CT (CBCT)/fluoroscopy system, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. The novel multimodality image-guidance systems allow physicians to quickly and accurately image patients while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and carry out innovative multi-modality therapeutics. Multiple preclinical translational studies pertaining to innovative minimally invasive technology are being developed in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with similar technology and multimodality image-guidance systems as the GTx OR, and acts as an appropriate platform for translation of research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, and preclinical imaging, and then translate that research into the GTx OR. This OR allows for the utilization of new technologies in cancer therapy, including molecular imaging and other innovative imaging modalities, and therefore enables a better quality of life for patients, both during and after the procedure. In this article, we describe the capabilities of the GTx systems and discuss the first-in-human technologies used and evaluated in the GTx OR.
Operation and performance of the Mars Exploration Rover imaging system on the Martian surface
Maki, J.N.; Litwin, T.; Schwochert, M.; Herkenhoff, K.
2005-01-01
The Imaging System on the Mars Exploration Rovers has successfully operated on the surface of Mars for over one Earth year. The acquisition of hundreds of panoramas and tens of thousands of stereo pairs has enabled the rovers to explore Mars at a level of detail unprecedented in the history of space exploration. In addition to providing scientific value, the images also play a key role in the daily tactical operation of the rovers. The mobile nature of the MER surface mission requires extensive use of the imaging system for traverse planning, rover localization, remote sensing instrument targeting, and robotic arm placement. Each of these activity types requires a different set of data compression rates, surface coverage, and image acquisition strategies. An overview of the surface imaging activities is provided, along with a summary of the image data acquired to date. © 2005 IEEE.
NASA Astrophysics Data System (ADS)
Conard, S. J.; Weaver, H. A.; Núñez, J. I.; Taylor, H. W.; Hayes, J. R.; Cheng, A. F.; Rodgers, D. J.
2017-09-01
The Long-Range Reconnaissance Imager (LORRI) is a high-resolution imaging instrument on the New Horizons spacecraft. LORRI collected over 5000 images during the approach and fly-by of the Pluto system in 2015, including the highest resolution images of Pluto and Charon and the four much smaller satellites (Styx, Nix, Kerberos, and Hydra) near the time of closest approach on 14 July 2015. LORRI is a narrow field of view (0.29°), Ritchey-Chrétien telescope with a 20.8 cm diameter primary mirror and a three-lens field flattener. The telescope has an effective focal length of 262 cm. The focal plane unit consists of a 1024 × 1024 pixel charge-coupled device (CCD) detector operating in frame transfer mode. LORRI provides panchromatic imaging over a bandpass that extends approximately from 350 nm to 850 nm. The instrument operates in an extreme thermal environment, viewing space from within the warm spacecraft. For this reason, LORRI has a silicon carbide optical system with passive thermal control, designed to maintain focus without adjustment over a wide temperature range from -100 °C to +50 °C. LORRI operated flawlessly throughout the encounter period, providing both science and navigation imaging of the Pluto system. We describe the preparations for the Pluto system encounter, including pre-encounter rehearsals, calibrations, and navigation imaging. In addition, we describe LORRI operations during the encounter, and the resulting imaging performance. Finally, we also briefly describe the post-Pluto encounter imaging of other Kuiper belt objects and the plans for the upcoming encounter with KBO 2014 MU69.
Image data-processing system for solar astronomy
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.
1977-01-01
The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.
Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery.
Zhu, Nan; Huang, Chih-Yu; Mondal, Suman; Gao, Shengkui; Huang, Chongyuan; Gruev, Viktor; Achilefu, Samuel; Liang, Rongguang
2015-09-01
A wearable all-plastic imaging system for real-time fluorescence image-guided surgery is presented. The compact size of the system is especially suitable for applications in the operating room. The system consists of a dual-mode imaging system, see-through goggle, autofocusing, and auto-contrast tuning modules. The paper will discuss the system design and demonstrate the system performance.
Dedicated mobile high resolution prostate PET imager with an insertable transrectal probe
Majewski, Stanislaw; Proffitt, James
2010-12-28
A dedicated mobile PET imaging system to image the prostate and surrounding organs. The imaging system includes an outside high-resolution PET imager placed close to the patient's torso and an insertable, compact transrectal probe that is placed in close proximity to the prostate and operates in conjunction with the outside imager. The two detector systems are spatially co-registered to each other. The outside imager is mounted on an open rotating gantry to provide torso-wide 3D images of the prostate and surrounding tissue and organs. The insertable probe provides closer imaging, high sensitivity, and a very high resolution, predominantly 2D view of the prostate and immediate surroundings. The probe is operated in conjunction with the outside imager and a fast data acquisition system to provide very high resolution reconstruction of the prostate and surrounding tissue and organs.
Imaging, object detection, and change detection with a polarized multistatic GPR array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, N. Reginald; Paglieroni, David W.
A polarized detection system performs imaging, object detection, and change detection factoring in the orientation of an object relative to the orientation of the transceivers. The polarized detection system may operate in one of several modes of operation based on whether the imaging, object detection, or change detection is performed separately for each transceiver orientation. In combined change mode, the polarized detection system performs imaging, object detection, and change detection separately for each transceiver orientation, and then combines changes across polarizations. In combined object mode, the polarized detection system performs imaging and object detection separately for each transceiver orientation, and then combines objects across polarizations and performs change detection on the result. In combined image mode, the polarized detection system performs imaging separately for each transceiver orientation, and then combines images across polarizations and performs object detection followed by change detection on the result.
Real-time image reconstruction and display system for MRI using a high-speed personal computer.
Haishi, T; Kose, K
1998-09-01
A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at a 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for the image display on an enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases a high-speed PC would be the best choice for the image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.
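For fully sampled Cartesian acquisitions such as those mentioned (FLASH, EPI), the core per-image reconstruction step is a 2-D inverse FFT of the k-space data followed by a magnitude operation. The sketch below shows only that step, not the optimized Windows 95 implementation of the paper.

```python
import numpy as np

def reconstruct_magnitude_image(kspace):
    """Basic Cartesian MRI reconstruction: inverse 2-D FFT of k-space, then magnitude.
    `kspace` is a complex (N, N) array, e.g. 128 x 128 as in the cited system."""
    image = np.fft.ifft2(np.fft.ifftshift(kspace))
    return np.abs(np.fft.fftshift(image))

# Example with synthetic data:
k = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
img = reconstruct_magnitude_image(k)
```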
Takeba, Jun; Umakoshi, Kensuke; Kikuchi, Satoshi; Matsumoto, Hironori; Annen, Suguru; Moriyama, Naoki; Nakabayashi, Yuki; Sato, Norio; Aibiki, Mayuki
2018-04-01
Screw fixation for unstable pelvic ring fractures is generally performed using the C-arm. However, some studies have reported erroneous piercing with screws, nerve injuries, and vessel injuries. Recent studies have reported the efficacy of screw fixation using navigation systems. The purpose of this retrospective study was to investigate the accuracy of screw fixation using the O-arm® imaging system and StealthStation® navigation system for unstable pelvic ring fractures. The participants were 10 patients with unstable pelvic ring fractures who underwent screw fixation using the O-arm StealthStation navigation system (nine cases with an iliosacral screw and one case with a lateral compression screw). We investigated operation duration, bleeding during the operation, the presence of complications during the operation, and the presence of cortical bone perforation by the screws based on postoperative CT scan images. We also measured the difference in screw tip positions between intraoperative navigation screenshot images and postoperative CT scan images. The average operation duration was 71 min, average bleeding was 12 ml, and there were no nerve or vessel injuries during the operation. There was no cortical bone perforation by the screws. The average difference between intraoperative navigation images and postoperative CT images was 2.5 ± 0.9 mm for all 18 screws used in this study. Our results suggest that the O-arm StealthStation navigation system provides accurate screw fixation for unstable pelvic ring fractures.
Recent progress of push-broom infrared hyper-spectral imager in SITP
NASA Astrophysics Data System (ADS)
Wang, Yueming; Hu, Weida; Shu, Rong; Li, Chunlai; Yuan, Liyin; Wang, Jianyu
2017-02-01
In the past decades, hyper-spectral imaging technologies have been well developed at SITP, CAS, with many innovations in system design and in the key parts of hyper-spectral imagers. The first airborne hyper-spectral imager in the world operating from the VNIR to the TIR emerged at SITP; it is well known as OMIS (Operational Modular Imaging Spectrometer). In recent years, new technologies have been introduced to improve the performance of hyper-spectral imaging systems. A high spatial resolution space-borne hyper-spectral imager aboard the Tiangong-1 spacecraft was launched on Sep. 29, 2011. Thanks to ground motion compensation and a high optical efficiency prismatic spectrometer, a large amount of hyper-spectral imagery with high sensitivity and good quality has been acquired in the past years, and some important phenomena have been observed. To diminish spectral distortion and expand the field of view, a new type of prismatic imaging spectrometer based on curved prisms was proposed by SITP. A prototype hyper-spectral imager based on a spherical fused silica prism was manufactured, operating from 400 nm to 2500 nm. We have also made progress in the development of LWIR hyper-spectral imaging technology. A compact, low F-number LWIR imaging spectrometer was designed, manufactured and integrated. The spectrometer operates in a cryogenically-cooled vacuum box to restrain background radiation, and the system performed well during a flight experiment on an airborne platform. Thanks to high sensitivity FPAs and high performance optics, the spatial resolution, spectral resolution and SNR of the system have improved enormously. However, more work should be done to achieve high radiometric accuracy in the future.
[Bone drilling simulation by three-dimensional imaging].
Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M
1989-06-01
The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.
Features and limitations of mobile tablet devices for viewing radiological images.
Grunert, J H
2015-03-01
Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security, and the range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
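As background for the recursive formulation discussed above, the sketch below shows the conventional serial recursion for the integral image and the constant-time box-sum lookup it enables. This is not the authors' row-parallel hardware design; function names and structure are illustrative only.

```python
# Minimal sketch (not the paper's hardware algorithms): the standard serial
# recursion for an integral image, which the paper's row-parallel designs decompose.
import numpy as np

def integral_image(img):
    """Compute ii(x, y) = sum of img over the rectangle [0..x, 0..y]."""
    h, w = img.shape
    row_sum = np.zeros((h, w), dtype=np.int64)   # s(x, y): cumulative sum along each row
    ii = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            row_sum[y, x] = img[y, x] + (row_sum[y, x - 1] if x > 0 else 0)
            ii[y, x] = row_sum[y, x] + (ii[y - 1, x] if y > 0 else 0)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of any rectangle in O(1) using four integral-image lookups."""
    total = ii[y1, x1]
    if y0 > 0: total -= ii[y0 - 1, x1]
    if x0 > 0: total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0: total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
assert np.array_equal(integral_image(img), img.cumsum(0).cumsum(1))
print(box_sum(integral_image(img), 1, 1, 3, 3))  # sum of the lower-right 3x3 block
```

The serial dependency visible in the inner loop is exactly what the proposed decomposition targets: each value depends on its left and upper neighbors, so a naive implementation cannot emit more than one value per cycle.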
Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D
2015-07-10
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E
2005-06-21
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit-monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) captures the 2-D image formed by the video data input and includes a low-pass filter for noise filtering of the video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches the derived regions to a 3-D model of the boiler, derives the 3-D structure of the deposition on the pendant tubes, and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant's pendant tube cleaning and operating systems.
Hybrid architecture active wavefront sensing and control system, and method
NASA Technical Reports Server (NTRS)
Feinberg, Lee D. (Inventor); Dean, Bruce H. (Inventor); Hyde, Tristram T. (Inventor)
2011-01-01
According to various embodiments, provided herein is an optical system and method that can be configured to perform image analysis. The optical system can comprise a telescope assembly and one or more hybrid instruments. The one or more hybrid instruments can be configured to receive image data from the telescope assembly and perform a fine guidance operation and a wavefront sensing operation, simultaneously, on the image data received from the telescope assembly.
47 CFR 15.509 - Technical requirements for ground penetrating radars and wall imaging systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Technical requirements for ground penetrating radars and wall imaging systems. 15.509 Section 15.509 Telecommunication FEDERAL COMMUNICATIONS... ground penetrating radars and wall imaging systems. (a) The UWB bandwidth of an imaging system operating...
47 CFR 15.509 - Technical requirements for ground penetrating radars and wall imaging systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Technical requirements for ground penetrating radars and wall imaging systems. 15.509 Section 15.509 Telecommunication FEDERAL COMMUNICATIONS... ground penetrating radars and wall imaging systems. (a) The UWB bandwidth of an imaging system operating...
47 CFR 15.509 - Technical requirements for ground penetrating radars and wall imaging systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Technical requirements for ground penetrating radars and wall imaging systems. 15.509 Section 15.509 Telecommunication FEDERAL COMMUNICATIONS... ground penetrating radars and wall imaging systems. (a) The UWB bandwidth of an imaging system operating...
47 CFR 15.509 - Technical requirements for ground penetrating radars and wall imaging systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 1 2014-10-01 2014-10-01 false Technical requirements for ground penetrating radars and wall imaging systems. 15.509 Section 15.509 Telecommunication FEDERAL COMMUNICATIONS... ground penetrating radars and wall imaging systems. (a) The UWB bandwidth of an imaging system operating...
47 CFR 15.509 - Technical requirements for ground penetrating radars and wall imaging systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Technical requirements for ground penetrating radars and wall imaging systems. 15.509 Section 15.509 Telecommunication FEDERAL COMMUNICATIONS... ground penetrating radars and wall imaging systems. (a) The UWB bandwidth of an imaging system operating...
NASA Astrophysics Data System (ADS)
Sinha, V.; Srivastava, A.; Lee, H. K.; Liu, X.
2013-05-01
The successful creation and operation of a neutron and X-ray combined computed tomography (NXCT) system has been demonstrated by researchers at the Missouri University of Science and Technology. The NXCT system has numerous applications in the field of material characterization and object identification in materials containing a mixture of atomic numbers. To date, feasibility studies have been performed for explosive detection and homeland security applications, particularly in concealed material detection and the determination of low-atomic-number materials, which cannot be detected using traditional X-ray imaging. The new system has the capability to provide complete structural and compositional information due to the complementary nature of X-ray and neutron interactions with materials. The design of the NXCT system facilitates simultaneous and instantaneous imaging operation, promising enhanced detection capabilities for explosive materials, low-atomic-number materials, and illicit materials in homeland security applications. In addition, a sample positioning system allowing the user to remotely and automatically manipulate the sample makes the system viable for commercial applications. Several explosives and weapon simulants have been imaged and the results are provided. The fusion algorithms, which combine the data from the neutron and X-ray imaging, produce superior images. This paper is a complete overview of the NXCT system for feasibility studies of explosive detection and homeland security applications. The design of the system, its operation, algorithm development, and detection schemes are provided. This is the first combined neutron and X-ray computed tomography system in operation. Furthermore, the method of fusing neutron and X-ray images together is a new approach which provides high-contrast images of the desired object. The system could serve as a standardized tool in nondestructive testing for many applications, especially in explosives detection and homeland security research.
Stereoscopic medical imaging collaboration system
NASA Astrophysics Data System (ADS)
Okuyama, Fumio; Hirano, Takenori; Nakabayasi, Yuusuke; Minoura, Hirohito; Tsuruoka, Shinji
2007-02-01
The computerization of clinical records and the adoption of multimedia have improved medical services in medical facilities. It is very important for patients to receive comprehensible informed consent; therefore, the doctor should plainly explain the purpose and content of diagnoses and treatments to the patient. We propose and design a Telemedicine Imaging Collaboration System which presents three-dimensional medical images, such as X-ray CT and MRI, as stereoscopic images by using a virtual common information space and by operating the images from a remote location. This system is composed of two personal computers, two 15-inch stereoscopic parallax-barrier-type LCD displays (LL-151D, Sharp), one 1 Gbps router, and 1000base LAN cables. The software is composed of a DICOM format data transfer program, an image operation program, a communication program between the two personal computers, and a real-time rendering program. Two identical images of 512×768 pixels are displayed on the two stereoscopic LCD displays, and both images can be expanded and reduced by mouse operation. This system can offer a comprehensible three-dimensional image of the diseased part, so the doctor and the patient can easily understand it, depending on their needs.
Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery
Zhu, Nan; Huang, Chih-Yu; Mondal, Suman; Gao, Shengkui; Huang, Chongyuan; Gruev, Viktor; Achilefu, Samuel; Liang, Rongguang
2015-01-01
A wearable all-plastic imaging system for real-time fluorescence image-guided surgery is presented. The compact size of the system is especially suitable for applications in the operating room. The system consists of a dual-mode imaging system, see-through goggle, autofocusing, and auto-contrast tuning modules. The paper will discuss the system design and demonstrate the system performance. PMID:26358823
High-resolution ophthalmic imaging system
Olivier, Scot S.; Carrano, Carmen J.
2007-12-04
A system for providing an improved-resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved-resolution retina image. The corresponding method comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved-resolution retina image.
Low-cost Volumetric Ultrasound by Augmentation of 2D Systems: Design and Prototype.
Herickhoff, Carl D; Morgan, Matthew R; Broder, Joshua S; Dahl, Jeremy J
2018-01-01
Conventional two-dimensional (2D) ultrasound imaging is a powerful diagnostic tool in the hands of an experienced user, yet 2D ultrasound remains clinically underutilized and inherently incomplete, with output being very operator dependent. Volumetric ultrasound systems can more fully capture a three-dimensional (3D) region of interest, but current 3D systems require specialized transducers, are prohibitively expensive for many clinical departments, and do not register image orientation with respect to the patient; these systems are designed to provide improved workflow rather than operator independence. This work investigates whether it is possible to add volumetric 3D imaging capability to existing 2D ultrasound systems at minimal cost, providing a practical means of reducing operator dependence in ultrasound. In this paper, we present a low-cost method to make 2D ultrasound systems capable of quality volumetric image acquisition: we present the general system design and image acquisition method, including the use of a probe-mounted orientation sensor, a simple probe fixture prototype, and an offline volume reconstruction technique. We demonstrate initial results of the method, implemented using a Verasonics Vantage research scanner.
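To make the reconstruction step concrete, the sketch below scatters orientation-tagged 2D frames into a voxel grid with nearest-neighbor insertion and averaging. The geometry model (probe rotating about a fixed pivot, orientation supplied as 3x3 rotation matrices) and all names are assumptions for illustration, not the authors' implementation on the Verasonics scanner.

```python
# Illustrative offline volume reconstruction from orientation-tagged 2D frames.
# Fixed-pivot geometry and rotation-matrix input are assumptions, not the paper's method.
import numpy as np

def reconstruct_volume(frames, rotations, px_size_mm, vol_shape, vox_size_mm):
    """Scatter each 2D frame's pixels into a voxel grid (nearest-neighbor, averaged)."""
    vol = np.zeros(vol_shape, dtype=np.float32)
    count = np.zeros(vol_shape, dtype=np.float32)
    h, w = frames[0].shape
    # Pixel coordinates in the image plane, in mm relative to the pivot point.
    cols, rows = np.meshgrid(np.arange(w), np.arange(h))
    plane = np.stack([(cols - w / 2) * px_size_mm,
                      rows * px_size_mm,
                      np.zeros_like(rows, dtype=float)], axis=-1)   # (h, w, 3)
    center = np.array(vol_shape) / 2.0
    for img, R in zip(frames, rotations):
        world = plane @ R.T                                  # rotate the frame into 3D space
        idx = np.round(world / vox_size_mm + center).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)), axis=-1)
        i, j, k = idx[ok].T
        np.add.at(vol, (i, j, k), img[ok])
        np.add.at(count, (i, j, k), 1.0)
    return vol / np.maximum(count, 1.0)                      # average overlapping contributions
```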
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.
NASA Astrophysics Data System (ADS)
Riviere, Nicolas; Hespel, Laurent; Ceolato, Romain; Drouet, Florence
2011-11-01
Onera, the French Aerospace Lab, develops and models active imaging systems to understand the relevant physical phenomena affecting their performance. As a consequence, efforts have been made both on the propagation of a pulse through the atmosphere (scintillation and turbulence effects) and on target geometries and their surface properties (radiometric and speckle effects). These imaging systems must operate at night, in all ambient illuminations and weather conditions, in order to perform strategic surveillance of the environment for various worldwide operations or to enhance the navigation of an aircraft. Onera has implemented codes for 2D and 3D laser imaging systems. Since we aim to image a scene even in the presence of rain, snow, fog, or haze, Onera introduces such meteorological effects into these numerical models and compares simulated images with measurements provided by commercial imaging systems.
40 CFR 265.71 - Use of manifest system.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., § 265.71 was amended by revising paragraph (a)(2), and by adding paragraphs (f), (g), (h), (i), (j), and... operator, the owner or operator may transmit to the system operator an image file of Page 1 of the manifest, or both a data string file and the image file corresponding to Page 1 of the manifest. Any data or...
2017-09-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. THESIS: TEST AND EVALUATION OF AN IMAGE-MATCHING NAVIGATION SYSTEM FOR A UAS OPERATING IN A GPS-DENIED... Approved for public release; distribution is unlimited.
Applications of superconducting bolometers in security imaging
NASA Astrophysics Data System (ADS)
Luukanen, A.; Leivo, M. M.; Rautiainen, A.; Grönholm, M.; Toivanen, H.; Grönberg, L.; Helistö, P.; Mäyrä, A.; Aikio, M.; Grossman, E. N.
2012-12-01
Millimeter-wave (MMW) imaging systems are currently undergoing deployment worldwide for airport security screening applications. Security screening through MMW imaging is facilitated by the relatively good transmission of these wavelengths through common clothing materials. Given the long wavelength of operation (frequencies between 20 GHz and ~100 GHz, corresponding to wavelengths between 1.5 cm and 3 mm), existing systems are suited for close-range imaging only due to substantial diffraction effects associated with practical aperture diameters. Present and emerging security challenges call for systems that are capable of imaging concealed threat items at stand-off ranges beyond 5 meters at near video frame rates, requiring a substantial increase in operating frequency in order to achieve useful spatial resolution. The construction of such imaging systems operating at several hundred GHz has been hindered by the lack of submm-wave low-noise amplifiers. In this paper we summarize our efforts in developing a submm-wave video camera which utilizes cryogenic antenna-coupled microbolometers as detectors. Whilst superconducting detectors impose the use of a cryogenic system, we argue that the resulting back-end complexity increase is a favorable trade-off compared to complex and expensive room temperature submm-wave LNAs both in performance and system cost.
NASA Astrophysics Data System (ADS)
Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao
2015-01-01
High-precision geometric correction of airborne hyperspectral remote sensing images has long been a difficult problem, and conventional approaches that correct images by selecting ground control points are not suitable for airborne hyperspectral imagery. An optical scanning system combined with an inertial measurement unit and a differential global positioning system (IMU/DGPS) is introduced to correct synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Posture parameters synchronized with OMIS II were first obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were conducted. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. Better image processing results were thereby achieved.
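The sketch below illustrates the kind of attitude-based direct georeferencing this workflow relies on: a rotation matrix built from IMU roll, pitch, and yaw carries a scan-line look vector into the local frame, and the DGPS position scales it to the ground. The rotation order, sign conventions, and flat-terrain intersection are simplifications for illustration, not the OMIS II processing chain.

```python
# Minimal sketch of attitude-based georeferencing of one scan-line pixel using
# IMU roll/pitch/yaw and DGPS position; conventions here are assumptions.
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-local-level rotation from roll, pitch, yaw (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def ground_point(position, roll, pitch, yaw, scan_angle, terrain_height=0.0):
    """Intersect one pixel's look vector with a flat terrain plane."""
    look_body = np.array([0.0, np.sin(scan_angle), -np.cos(scan_angle)])  # cross-track scan
    look = rotation_matrix(roll, pitch, yaw) @ look_body
    t = (terrain_height - position[2]) / look[2]            # scale along the ray to the ground
    return position + t * look

# Example: sensor at 1000 m altitude, small attitude perturbations, 5-degree scan angle.
print(ground_point(np.array([0.0, 0.0, 1000.0]),
                   np.radians(1.0), np.radians(-0.5), np.radians(90.0),
                   np.radians(5.0)))
```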
Portable concealed weapon detection using millimeter-wave FMCW radar imaging
NASA Astrophysics Data System (ADS)
Johnson, Michael A.; Chang, Yu-Wen
2001-02-01
Unobtrusive detection of concealed weapons on persons or in abandoned bags would provide law enforcement a powerful tool to focus resources and increase traffic throughput in high-risk situations. We have developed a fast image-scanning 94 GHz radar system that is suitable for portable operation and remote viewing of radar data. This system includes a novel fast image-scanning antenna that allows for the acquisition of medium-resolution 3D millimeter-wave images of stationary targets with frame times on the order of one second. The 3D radar data allows for potential isolation of concealed weapons from body and environmental clutter such as nearby furniture or other people. The radar is an active system, so image quality is not affected indoors; the emitted power is nevertheless very low, so there are no health concerns for the operator or targets. The low-power operation is still sufficient to penetrate heavy clothing or material. Small system size allows for easy transport and rapid deployment of the system as well as an easy migration path to future handheld systems.
Portable imaging system method and apparatus
Freifeld, Barry M.; Kneafsley, Timothy J.; Pruess, Jacob; Tomutsa, Liviu; Reiter, Paul A.; deCastro, Ted M.
2006-07-25
An operator shielded X-ray imaging system has sufficiently low mass (less than 300 kg) and is compact enough to enable portability by reducing operator shielding requirements to a minimum shielded volume. The resultant shielded volume may require a relatively small mass of shielding in addition to the already integrally shielded X-ray source, intensifier, and detector. The system is suitable for portable imaging of well cores at remotely located well drilling sites. The system accommodates either small samples, or small cross-sectioned objects of unlimited length. By rotating samples relative to the imaging device, the information required for computer aided tomographic reconstruction may be obtained. By further translating the samples relative to the imaging system, fully three dimensional (3D) tomographic reconstructions may be obtained of samples having arbitrary length.
Green's function and image system for the Laplace operator in the prolate spheroidal geometry
NASA Astrophysics Data System (ADS)
Xue, Changfeng; Deng, Shaozhong
2017-01-01
In the present paper, electrostatic image theory is studied for Green's function for the Laplace operator in the case where the fundamental domain is either the exterior or the interior of a prolate spheroid. In either case, an image system is developed to consist of a point image inside the complement of the fundamental domain and an additional symmetric continuous surface image over a confocal prolate spheroid outside the fundamental domain, although the process of calculating such an image system is easier for the exterior than for the interior Green's function. The total charge of the surface image is zero and its centroid is at the origin of the prolate spheroid. In addition, if the source is on the focal axis outside the prolate spheroid, then the image system of the exterior Green's function consists of a point image on the focal axis and a line image on the line segment between the two focal points.
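For orientation, the image-system representation described above has the generic structure written below for the exterior problem. The image point location, its strength, and the surface density on the confocal spheroid are the quantities the paper derives and are not reproduced here; only the overall form and the zero-total-charge constraint stated in the abstract are shown.

```latex
% Generic structure of the exterior image-system representation (source at r_s);
% q_K, r_K, and the surface density \sigma on the confocal spheroid S' are the
% paper's derived quantities and are not given here.
G(\mathbf{r},\mathbf{r}_s) \;=\;
\frac{1}{4\pi\,\lvert \mathbf{r}-\mathbf{r}_s\rvert}
\;+\;
\frac{q_K}{4\pi\,\lvert \mathbf{r}-\mathbf{r}_K\rvert}
\;+\;
\oint_{S'}\frac{\sigma(\mathbf{r}')}{4\pi\,\lvert \mathbf{r}-\mathbf{r}'\rvert}\,\mathrm{d}S',
\qquad
\oint_{S'}\sigma(\mathbf{r}')\,\mathrm{d}S' = 0 .
```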
Hard real-time beam scheduler enables adaptive images in multi-probe systems
NASA Astrophysics Data System (ADS)
Tobias, Richard J.
2014-03-01
Real-time embedded-system concepts were adapted to allow an imaging system to responsively control the firing of multiple probes. Large-volume, operator-independent (LVOI) imaging would increase the diagnostic utility of ultrasound. An obstacle to this innovation is the inability of current systems to drive multiple transducers dynamically. Commercial systems schedule scanning with static lists of beams to be fired and processed; here we allow an imager to adapt to changing beam schedule demands, as an intelligent response to incoming image data. An example of scheduling changes is demonstrated with a flexible duplex mode two-transducer application mimicking LVOI imaging. Embedded-system concepts allow an imager to responsively control the firing of multiple probes. Operating systems use powerful dynamic scheduling algorithms, such as fixed-priority preemptive scheduling. Even real-time operating systems lack the timing constraints required for ultrasound. Particularly for Doppler modes, events must be scheduled with sub-nanosecond precision, and without meeting this requirement the acquired data are useless. A successful scheduler needs unique characteristics. To get close to what would be needed in LVOI imaging, we show two transducers scanning different parts of a subject's leg. When one transducer notices flow in a region where the scans overlap, the system reschedules the other transducer to start flow mode and alter its beams to get a view of the observed vessel and produce a flow measurement. The second transducer does this in a focused region only. This demonstrates key attributes of a successful LVOI system, such as robustness against obstructions and adaptive self-correction.
Range determination for scannerless imaging
Muguira, Maritza Rosa; Sackos, John Theodore; Bradley, Bart Davis; Nellums, Robert
2000-01-01
A new method of operating a scannerless range imaging system (e.g., a scannerless laser radar) has been developed. This method is designed to compensate for nonlinear effects which appear in many real-world components. The system operates by determining, for each pixel of an image, the phase shift of the laser modulation, a quantity that is physically related to the path length between the laser source and the detector.
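The idealized phase-to-range relation underlying this class of amplitude-modulated range imagers is written below for reference; the modulation frequency symbol and the unambiguous-range expression are standard results, not taken from the patent, whose contribution is the compensation of component nonlinearities on top of this relation.

```latex
% Idealized relation for an amplitude-modulated scannerless range imager:
% round-trip path 2R at modulation frequency f_m produces phase shift \phi.
\phi \;=\; \frac{2\pi f_m}{c}\,(2R)
\quad\Longrightarrow\quad
R \;=\; \frac{c\,\phi}{4\pi f_m},
\qquad
R_{\mathrm{unambiguous}} \;=\; \frac{c}{2 f_m}.
```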
Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitus, B.R.; Goddard, J.S.; Jatko, W.B.
1993-06-01
The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.
Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.
Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry
2012-12-01
Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
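The data flow of such a stereo reconstruction pipeline (disparity estimation followed by reprojection to a 3D point cloud) can be sketched with off-the-shelf OpenCV primitives as below. The system described above uses a custom GPU-based algorithm, so this is only a stand-in to show the stages; the calibration matrix Q and all parameters are placeholders.

```python
# Illustrative stereo-to-3D pipeline using standard OpenCV calls; this is a
# stand-in for the paper's custom GPU reconstruction algorithm, not a reproduction of it.
import cv2
import numpy as np

def reconstruct(left_gray, right_gray, Q):
    """Disparity map -> 3D point cloud for a rectified stereo pair."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,   # must be a multiple of 16
                                    blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)     # (H, W, 3) in the camera frame
    valid = disparity > 0
    return points_3d[valid], disparity

# Q is the 4x4 disparity-to-depth matrix from stereo calibration (cv2.stereoRectify);
# an identity matrix is used here only so the snippet runs.
left = np.zeros((240, 320), dtype=np.uint8)
right = np.zeros((240, 320), dtype=np.uint8)
cloud, disp = reconstruct(left, right, np.eye(4, dtype=np.float32))
```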
Nickoloff, Edward Lee
2011-01-01
This article reviews the design and operation of both flat-panel detector (FPD) and image intensifier fluoroscopy systems. The different components of each imaging chain and their functions are explained and compared. FPD systems have multiple advantages such as a smaller size, extended dynamic range, no spatial distortion, and greater stability. However, FPD systems typically have the same spatial resolution for all fields of view (FOVs) and are prone to ghosting. Image intensifier systems have better spatial resolution with the use of smaller FOVs (magnification modes) and tend to be less expensive. However, the spatial resolution of image intensifier systems is limited by the television system to which they are coupled. Moreover, image intensifier systems are degraded by glare, vignetting, spatial distortions, and defocusing effects. FPD systems do not have these problems. Some recent innovations to fluoroscopy systems include automated filtration, pulsed fluoroscopy, automatic positioning, dose-area product meters, and improved automatic dose rate control programs. Operator-selectable features may affect both the patient radiation dose and image quality; these selectable features include dose level setting, the FOV employed, fluoroscopic pulse rates, geometric factors, display software settings, and methods to reduce the imaging time. © RSNA, 2011.
Development of online lines-scan imaging system for chicken inspection and differentiation
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Chan, Diane E.; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.
2006-10-01
An online line-scan imaging system was developed for differentiation of wholesome and systemically diseased chickens. The hyperspectral imaging system used in this research can be directly converted to multispectral operation and would provide the ideal implementation of essential features for data-efficient high-speed multispectral classification algorithms. The imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph for line-scan images. The system scanned the surfaces of chicken carcasses on an eviscerating line at a poultry processing plant in December 2005. A method was created to recognize birds entering and exiting the field of view, and to locate a Region of Interest on the chicken images from which useful spectra were extracted for analysis. From analysis of the difference spectra between wholesome and systemically diseased chickens, four wavelengths of 468 nm, 501 nm, 582 nm and 629 nm were selected as key wavelengths for differentiation. The method of locating the Region of Interest will also have practical application in multispectral operation of the line-scan imaging system for online chicken inspection. This line-scan imaging system makes possible the implementation of multispectral inspection using the key wavelengths determined in this study with minimal software adaptations and without the need for cross-system calibration.
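To indicate how the four key wavelengths could feed a simple multispectral decision rule, a minimal sketch follows. The actual classifier and thresholds used in the study are not reproduced; the band-ratio feature, threshold value, and spectra below are placeholders.

```python
# Hypothetical threshold rule on the four key bands (468, 501, 582, 629 nm);
# the feature and threshold are illustrative, not the study's classifier.
import numpy as np

KEY_BANDS_NM = (468, 501, 582, 629)

def classify_carcass(roi_spectra, wavelengths, threshold=1.05):
    """roi_spectra: (n_pixels, n_bands) relative reflectance extracted from the ROI."""
    idx = [int(np.argmin(np.abs(wavelengths - w))) for w in KEY_BANDS_NM]
    b468, b501, b582, b629 = (roi_spectra[:, i] for i in idx)
    # Placeholder feature: ratio of the two longer to the two shorter key bands.
    feature = np.mean((b582 + b629) / (b468 + b501 + 1e-9))
    return "wholesome" if feature > threshold else "systemically diseased"

wavelengths = np.linspace(400, 900, 60)          # hypothetical spectrograph band centers
roi = np.random.rand(200, 60) + 0.5              # hypothetical ROI spectra
print(classify_carcass(roi, wavelengths))
```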
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction: Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods: Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results: Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion: 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
STRIPE: Remote Driving Using Limited Image Data
NASA Technical Reports Server (NTRS)
Kay, Jennifer S.
1997-01-01
Driving a vehicle, either directly or remotely, is an inherently visual task. When heavy fog limits visibility, we reduce our car's speed to a slow crawl, even along very familiar roads. In teleoperation systems, an operator's view is limited to images provided by one or more cameras mounted on the remote vehicle. Traditional methods of vehicle teleoperation require that a real time stream of images is transmitted from the vehicle camera to the operator control station, and the operator steers the vehicle accordingly. For this type of teleoperation, the transmission link between the vehicle and operator workstation must be very high bandwidth (because of the high volume of images required) and very low latency (because delayed images can cause operators to steer incorrectly). In many situations, such a high-bandwidth, low-latency communication link is unavailable or even technically impossible to provide. Supervised TeleRobotics using Incremental Polyhedral Earth geometry, or STRIPE, is a teleoperation system for a robot vehicle that allows a human operator to accurately control the remote vehicle across very low bandwidth communication links, and communication links with large delays. In STRIPE, a single image from a camera mounted on the vehicle is transmitted to the operator workstation. The operator uses a mouse to pick a series of 'waypoints' in the image that define a path that the vehicle should follow. These 2D waypoints are then transmitted back to the vehicle, where they are used to compute the appropriate steering commands while the next image is being transmitted. STRIPE requires no advance knowledge of the terrain to be traversed, and can be used by novice operators with only minimal training. STRIPE is a unique combination of computer and human control. The computer must determine the 3D world path designated by the 2D waypoints and then accurately control the vehicle over rugged terrain. The human issues involve accurate path selection, and the prevention of disorientation, a common problem across all types of teleoperation systems. STRIPE is the only semi-autonomous teleoperation system that can accurately follow paths designated in monocular images on varying terrain. The thesis describes the STRIPE algorithm for tracking points using the incremental geometry model, insight into the design and redesign of the interface, an analysis of the effects of potential errors, details of the user studies, and hints on how to improve both the algorithm and interface for future designs.
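As a simplified stand-in for the waypoint projection step, the sketch below casts a clicked image pixel through a pinhole camera model and intersects it with a flat ground plane. STRIPE itself uses an incremental polyhedral earth-geometry model, so this is only the flat-terrain special case; the intrinsics and camera pose are placeholders.

```python
# Flat-ground waypoint projection: a simplification of STRIPE's incremental
# polyhedral earth geometry, shown only to illustrate the 2D-to-3D mapping.
import numpy as np

def waypoint_to_ground(u, v, K, R_cam_to_world, cam_pos, ground_z=0.0):
    """Map a 2D waypoint (u, v) in pixels to a 3D point on the plane z = ground_z."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])      # viewing ray in the camera frame
    ray_world = R_cam_to_world @ ray_cam
    t = (ground_z - cam_pos[2]) / ray_world[2]               # scale along the ray to the ground
    return cam_pos + t * ray_world

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                               # placeholder intrinsics
theta = np.radians(30.0)                                      # camera pitched down 30 degrees
forward = np.array([np.cos(theta), 0.0, -np.sin(theta)])      # camera z-axis in world coords
right   = np.array([0.0, -1.0, 0.0])                          # camera x-axis (image right)
down    = np.cross(forward, right)                            # camera y-axis (image down)
R = np.column_stack([right, down, forward])                   # camera-to-world rotation
print(waypoint_to_ground(320, 400, K, R, np.array([0.0, 0.0, 1.5])))
```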
Endoscopes with latest technology and concept.
Gotoh
2003-09-01
Endoscopic imaging systems that perform as the "eye" of the operator during endoscopic surgical procedures have developed rapidly due to various technological developments. In addition, since the most recent turn of the century robotic surgery has increased its scope through the utilization of systems such as Intuitive Surgical's da Vinci System. To optimize the imaging required for precise robotic surgery, a unique endoscope has been developed, consisting of both a two dimensional (2D) image optical system for wider observation of the entire surgical field, and a three dimensional (3D) image optical system for observation of the more precise details at the operative site. Additionally, a "near infrared radiation" endoscopic system is under development to detect the sentinel lymph node more readily. Such progress in the area of endoscopic imaging is expected to enhance the surgical procedure from both the patient's and the surgeon's point of view.
Development and testing of the EVS 2000 enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.
A Electro-Optical Image Algebra Processing System for Automatic Target Recognition
NASA Astrophysics Data System (ADS)
Coffield, Patrick Cyrus
The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations is directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point use operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.
Standoff concealed weapon detection using a 350-GHz radar imaging system
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.; Severtsen, Ronald H.; McMakin, Douglas L.; Hatchell, Brian K.; Valdez, Patrick L. J.
2010-04-01
The sub-millimeter (sub-mm) wave frequency band from 300 - 1000 GHz is currently being developed for standoff concealed weapon detection imaging applications. This frequency band is of interest due to the unique combination of high resolution and clothing penetration. The Pacific Northwest National Laboratory (PNNL) is currently developing a 350 GHz, active, wideband, three-dimensional, radar imaging system to evaluate the feasibility of active sub-mm imaging for standoff detection. Standoff concealed weapon and explosive detection is a pressing national and international need for both civilian and military security, as it may allow screening at safer distances than portal screening techniques. PNNL has developed a prototype active wideband 350 GHz radar imaging system based on a wideband, heterodyne, frequency-multiplier-based transceiver system coupled to a quasi-optical focusing system and high-speed rotating conical scanner. This prototype system operates at ranges up to 10+ meters, and can acquire an image in 10 - 20 seconds, which is fast enough to scan cooperative personnel for concealed weapons. The wideband operation of this system provides accurate ranging information, and the images obtained are fully three-dimensional. During the past year, several improvements to the system have been designed and implemented, including increased imaging speed using improved balancing techniques, wider bandwidth, and improved image processing techniques. In this paper, the imaging system is described in detail and numerous imaging results are presented.
Dillman, Jonathan R.; Chen, Shigao; Davenport, Matthew S.; Zhao, Heng; Urban, Matthew W.; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L.
2014-01-01
Background There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. Purpose To assess the repeatability and reproducibility of superficial shear wave speed (SWS) measurements acquired from elasticity phantoms at varying imaging depths using three different imaging methods, two different ultrasound systems, and multiple operators. Methods and Materials Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems, Inc. (Norfolk, VA) were utilized for our investigation. Institution #1 used an Acuson S3000 ultrasound system (Siemens Medical Solutions USA, Inc.) and three different shear wave imaging method/transducer combinations, while institution #2 used an Aixplorer ultrasound system (Supersonic Imagine) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0, 2.5, and 4.0 cm) by four operators at each institution. Student’s t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single measure intra-class correlation coefficients and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. Results For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (p=0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s Aixplorer system (p>0.05). The coefficients of variation were low (0.5–6.8%), and inter-operator agreement was near-perfect (ICCs ≥0.99). Shear wave imaging method and imaging depth significantly affected measured SWS (p<0.0001). Conclusions Superficial SWS measurements in elasticity phantoms demonstrate minimal variability across imaging method/transducer combinations, imaging depths, and between operators. The exact clinical significance of this variability is uncertain and may vary by organ and specific disease state. PMID:25249389
Image acquisition system for traffic monitoring applications
NASA Astrophysics Data System (ADS)
Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben
1995-03-01
An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling up to 160 km/hr. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g., distinguishing cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods that permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of producing automatic classification of vehicle class and recording of vehicle number plates with a success rate around 90 percent in a period of 24 hours.
Smart Stylet: The development and use of a bedside external ventricular drain image-guidance system
Patil, Vaibhav; Gupta, Rajiv; Estépar, Raúl San José; Lacson, Ronilda; Cheung, Arnold; Wong, Judith M.; Popp, A. John; Golby, Alexandra; Ogilvy, Christopher; Vosburgh, Kirby G.
2015-01-01
Background Placement accuracy of ventriculostomy catheters is reported in a wide and variable range. Development of an efficient image-guidance system may improve physician performance and patient safety. Objective We evaluate the prototype of Smart Stylet, a new electromagnetic image-guidance system for use during bedside ventriculostomy. Methods Accuracy of the Smart Stylet system was assessed. System operators were evaluated for their ability to successfully target the ipsilateral frontal horn in a phantom model. Results Target registration error across 15 intracranial targets ranged from 1.3 – 4.6 mm (mean 3.1 mm). Using Smart Stylet guidance, a test operator successfully passed a ventriculostomy catheter to a shifted ipsilateral frontal horn 20/20 (100%) times from the frontal approach in a skull phantom. Without Smart Stylet guidance, the operator was successful 4/10 (40 %) from the right frontal approach and 6/10 (60 %) from the left frontal approach. In a separate experiment, resident operators were successful 2/4 (50%) when targeting the shifted ipsilateral frontal horn with Smart Stylet guidance and 0/4 (0 %) without image-guidance using a skull phantom. Conclusions Smart Stylet may improve the ability to successfully target the ventricles during frontal ventriculostomy. PMID:25662506
Robustness of speckle imaging techniques applied to horizontal imaging scenarios
NASA Astrophysics Data System (ADS)
Bos, Jeremy P.
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction to improve the quality of imagery available to operators. To be effective, these systems must operate over significant variations in turbulence conditions while also being subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods are one of a variety of methods recently proposed for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. This performance evaluation is made possible using a novel technique for simulating anisoplanatic image formation. I find that incorporating as few as 15 image frames and 4 estimates of the object phase per reconstructed frame provides an average 45% reduction in Mean Squared Error (MSE) and a 68% reduction in the deviation of the MSE. In addition, the Knox-Thompson phase recovery method is demonstrated to produce images in half the time required by the bispectrum. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate reconstruction quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
47 CFR 15.513 - Technical requirements for medical imaging systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Technical requirements for medical imaging systems. 15.513 Section 15.513 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Ultra-Wideband Operation § 15.513 Technical requirements for medical imaging systems. (a) The UWB...
NASA Astrophysics Data System (ADS)
Sun, Yi; You, Sixian; Tu, Haohua; Spillman, Darold R.; Marjanovic, Marina; Chaney, Eric J.; Liu, George Z.; Ray, Partha S.; Higham, Anna; Boppart, Stephen A.
2017-02-01
Label-free multi-photon imaging has been a powerful tool for studying tissue microstructures and biochemical distributions, particularly for investigating tumors and their microenvironments. However, it remains challenging for traditional bench-top multi-photon microscope systems to conduct ex vivo tumor tissue imaging in the operating room due to their bulky setups and laser sources. In this study, we designed, built, and clinically demonstrated a portable multi-modal nonlinear label-free microscope system that combined four modalities: two- and three-photon fluorescence for studying the distributions of FAD and NADH, and second- and third-harmonic generation for collagen fiber structures and for the distribution of micro-vesicles found in tumors and the microenvironment, respectively. Optical realignments and switching between modalities were motorized for more rapid and efficient imaging and for a light-tight enclosure, reducing ambient light noise to only 5% within the brightly lit operating room. Using up to 20 mW of laser power after a 20x objective, this system can acquire multi-modal sets of images over 600 μm × 600 μm at an acquisition rate of 60 seconds using galvo-mirror scanning. This portable microscope system was demonstrated in the operating room for imaging fresh, resected, unstained breast tissue specimens, and for assessing tumor margins and the tumor microenvironment. This real-time label-free nonlinear imaging system has the potential to uniquely characterize breast cancer margins and the microenvironment of tumors to intraoperatively identify structural, functional, and molecular changes that could indicate the aggressiveness of the tumor.
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1993-01-01
The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).
Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.
Bouaynaya, Nidhal; Schonfeld, Dan
2008-05-01
In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator that is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
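For reference, the basic SV gray-level erosion and dilation referred to above are commonly written as below, with the structuring function allowed to vary with spatial position; the paper's exact notation may differ, and SV opening and closing are the usual compositions of these two operators.

```latex
% Spatially-variant gray-level erosion and dilation in a form standard in this
% literature: the structuring function \theta_{x} depends on the location x.
(\,f \ominus \theta\,)(x) \;=\; \inf_{y \in \mathbb{R}^n}\bigl[\,f(y) - \theta_{x}(y)\,\bigr],
\qquad
(\,f \oplus \theta\,)(x) \;=\; \sup_{y \in \mathbb{R}^n}\bigl[\,f(y) + \theta_{y}(x)\,\bigr].
```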
Systems and methods for thermal imaging technique for measuring mixing of fluids
Booten, Charles; Tomerlin, Jeff; Winkler, Jon
2016-06-14
Systems and methods for thermal imaging for measuring mixing of fluids are provided. In one embodiment, a method for measuring mixing of gaseous fluids using thermal imaging comprises: positioning a thermal test medium parallel to a direction of gaseous fluid flow from an outlet vent of a momentum source, wherein when the source is operating, the fluid flows across a surface of the medium; obtaining an ambient temperature value from a baseline thermal image of the surface; obtaining at least one operational thermal image of the surface when the fluid is flowing from the outlet vent across the surface, wherein the fluid has a temperature different than the ambient temperature; and calculating at least one temperature-difference fraction associated with at least a first position on the surface based on a difference between temperature measurements obtained from the at least one operational thermal image and the ambient temperature value.
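A plausible reading of the temperature-difference fraction in the claim above is a local temperature rise over the ambient baseline, normalized so that it is dimensionless. The sketch below illustrates that reading; the normalization by a supply temperature and the function name are our assumptions, not language from the patent.

```python
import numpy as np

def temperature_difference_fraction(operational_img, ambient_temp, supply_temp):
    """Local rise over ambient, normalized by the supply-to-ambient difference,
    so 1.0 means pure supply air and 0.0 means fully mixed-out ambient air.
    (Normalizing by supply_temp is our assumption; the patent only requires a
    difference against the ambient baseline.)"""
    return (operational_img - ambient_temp) / (supply_temp - ambient_temp)

# Example: a synthetic 3x3 operational image with 20 C ambient, 30 C supply air.
op = np.array([[30.0, 27.0, 24.0],
               [29.0, 26.0, 23.0],
               [28.0, 25.0, 22.0]])
print(temperature_difference_fraction(op, ambient_temp=20.0, supply_temp=30.0))
```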
Cellular resolution functional imaging in behaving rats using voluntary head restraint
Scott, Benjamin B.; Brody, Carlos D.; Tank, David W.
2013-01-01
High-throughput operant conditioning systems for rodents provide efficient training on sophisticated behavioral tasks. Combining these systems with technologies for cellular resolution functional imaging would provide a powerful approach to study neural dynamics during behavior. Here we describe an integrated two-photon microscope and behavioral apparatus that allows cellular resolution functional imaging of cortical regions during epochs of voluntary head restraint. Rats were trained to initiate periods of restraint up to 8 seconds in duration, which provided the mechanical stability necessary for in vivo imaging while allowing free movement between behavioral trials. A mechanical registration system repositioned the head to within a few microns, allowing the same neuronal populations to be imaged on each trial. In proof-of-principle experiments, calcium dependent fluorescence transients were recorded from GCaMP-labeled cortical neurons. In contrast to previous methods for head restraint, this system can also be incorporated into high-throughput operant conditioning systems. PMID:24055015
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by the measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the cycle shift operation controlled by a hyper-chaotic system. Cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.
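As a rough illustration of the two stages described above, the sketch below measures an image along both directions with two measurement matrices and then cyclically shifts rows and columns using a chaotic sequence. Random Gaussian matrices and a logistic map stand in for the paper's key-derived measurement matrices and hyper-chaotic system, and decryption (which would require a compressive-sensing reconstruction) is not shown.

```python
import numpy as np

def measure_2d(img, ratio=0.5, seed=0):
    """2D compressive sensing step: measure the image along rows and columns
    with two random Gaussian matrices (stand-ins for key-derived matrices)."""
    rng = np.random.default_rng(seed)
    n, m = img.shape
    k = int(n * ratio)
    phi1 = rng.standard_normal((k, n)) / np.sqrt(k)
    phi2 = rng.standard_normal((k, m)) / np.sqrt(k)
    return phi1 @ img @ phi2.T          # compressed and partially encrypted

def cycle_shift_encrypt(meas, x0=0.3):
    """Re-encryption step: cyclically shift each row and column by amounts
    drawn from a chaotic sequence (logistic map used here for illustration)."""
    out = meas.copy()
    x = x0
    for i in range(out.shape[0]):
        x = 3.99 * x * (1.0 - x)                      # chaotic iterate in (0, 1)
        out[i] = np.roll(out[i], int(x * out.shape[1]))
    for j in range(out.shape[1]):
        x = 3.99 * x * (1.0 - x)
        out[:, j] = np.roll(out[:, j], int(x * out.shape[0]))
    return out
```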
Coherent white light amplification
Jovanovic, Igor; Barty, Christopher P.
2004-05-25
A system for coherent simultaneous amplification of a broad spectral range of light that includes an optical parametric amplifier and a source of a seed pulse is described. A first angular dispersive element is operatively connected to the source of a seed pulse. A first imaging telescope is operatively connected to the first angular dispersive element and operatively connected to the optical parametric amplifier. A source of a pump pulse is operatively connected to the optical parametric amplifier. A second imaging telescope is operatively connected to the optical parametric amplifier and a second angular dispersive element is operatively connected to the second imaging telescope.
2014-08-04
The ARAPAIMA (Application for Resident Space Object Proximity Analysis and IMAging) mission is carried out by a 6U CubeSat class satellite equipped with a warm gas propulsion system and an attitude determination and control subsystem (ADCS) for a proximity operation and imaging satellite mission.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a predetermined reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
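The ranging idea described above can be paraphrased in a few lines: difference the laser-on and laser-off frames to isolate the spot, compute its centroid, and convert the pixel disparity from the infinite-range reference point into range. The sketch below follows that outline; the thresholding rule, the `baseline_gain` calibration constant, and the simple inverse-disparity model are our assumptions rather than the patented implementation.

```python
import numpy as np

def laser_spot_range(frame_ambient, frame_laser, ref_point, baseline_gain):
    """Estimate range from the disparity between the laser-spot centroid and the
    infinite-range reference point ref_point = (x, y).  baseline_gain lumps the
    camera/laser geometry into one assumed calibration constant; the function
    assumes a laser spot is actually present in frame_laser."""
    diff = np.abs(frame_laser.astype(float) - frame_ambient.astype(float))
    diff[diff < diff.max() * 0.5] = 0.0           # keep only the bright spot
    ys, xs = np.nonzero(diff)
    weights = diff[ys, xs]
    centroid = np.array([np.average(xs, weights=weights),
                         np.average(ys, weights=weights)])
    disparity = np.linalg.norm(centroid - np.asarray(ref_point, dtype=float))
    return baseline_gain / max(disparity, 1e-6)   # range varies as 1 / disparity
```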
Systems workplace for endoscopic surgery.
Irion, K M; Novak, P
2000-01-01
With the advent of minimally invasive surgery (MIS) a decade ago, the requirements for operating rooms (OR) and their equipment have been increased. Compared with conventional open surgery, the new endoscopic techniques require additional tools. Television systems, for video-assisted image acquisition and visualisation, including cameras, monitors and light systems, as well as insufflators, pumps, high-frequency units, lasers and motorised therapy units, are nowadays usually made available on carts during endoscopic surgery. In conjunction with a set of endoscopic instruments, these high-tech units allow new operating techniques to be performed. The benefit for patients has become clear in recent years; however, the technical complexity of OR has also increased considerably. To minimise this problem for the OR personnel, the MIS concept 'OR1' (Operating Room 1) was developed and implemented. OR1 is a fully functional and integrated multi-speciality surgical suite for MIS. The centrepieces of the OR1 are the Storz Communication Bus (SCB) and the advanced image and data archiving system (Aida) from Karl Storz, Tuttlingen, Germany. Both components allow monitoring, access and networking of the MIS equipment and other OR facilities, as well as the acquisition, storage and display of image, patient and equipment data during the endoscopic procedure. A central user interface allows efficient, simplified operation and online clinical images. Due to the system integration, the handling of complex equipment is considerably simplified, logistical procedures in the OR are improved, procedure times are shorter and, particularly noteworthy, operative risk can be reduced through simplified device operation.
Small Interactive Image Processing System (SMIPS) system description
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.
Mobashsher, Ahmed Toaha; Abbosh, A M
2016-11-29
Rapid, on-the-spot diagnostic and monitoring systems are vital for the survival of patients with intracranial hematoma, as their conditions drastically deteriorate with time. To address the limited accessibility, high costs and static structure of currently used MRI and CT scanners, a portable non-invasive multi-slice microwave imaging system is presented for accurate 3D localization of hematoma inside the human head. This diagnostic system provides fast data acquisition and imaging compared to the existing systems by means of a compact array of low-profile, unidirectional antennas with wideband operation. The 3D printed low-cost and portable system can be installed in an ambulance for rapid on-site diagnosis by paramedics. In this paper, the multi-slice head imaging system's operating principle is numerically analysed and experimentally validated on realistic head phantoms. Quantitative analyses demonstrate that the multi-slice head imaging system is able to generate better quality reconstructed images providing 70% higher average signal to clutter ratio, 25% enhanced maximum signal to clutter ratio and around 60% hematoma target localization compared to the previous head imaging systems. Nevertheless, numerical and experimental results demonstrate that previously reported 2D imaging systems are vulnerable to localization error, which is overcome in the presented multi-slice 3D imaging system. The non-ionizing system, which uses safe levels of very low microwave power, is also tested on human subjects. Results of realistic phantom and subjects demonstrate the feasibility of the system in future preclinical trials.
Interactive data-processing system for metallurgy
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1978-01-01
Equipment indicates that system can rapidly and accurately process metallurgical and materials-processing data for wide range of applications. Advantages include increase in contrast between areas on image, ability to analyze images via operator-written programs, and space available for storing images.
A low-cost multimodal head-mounted display system for neuroendoscopic surgery.
Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei
2018-01-01
With rapid advances in technology, wearable devices such as the head-mounted display (HMD) have been adopted for various uses in medical science, ranging from simply aiding in fitness to assisting surgery. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, mainly consisting of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this intensively integrated system, the neurosurgeon could freely switch between endoscopic image, three-dimensional (3D) reconstructed virtual endoscopy images, and surrounding environment images. Using a leap motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relation between lesions and normal tissues at will. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate. Screen resolution of the HMD was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon might get a better comprehension of lesions by freely switching among images of different modalities. The system had a steep learning curve, which meant a quick increment of skill with it. Compared with commercially available surgical assistant instruments, this system was relatively low-cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost efficient in neuroendoscopic surgery.
Electronic imaging system and technique
Bolstad, J.O.
1984-06-12
A method and system for viewing objects obscured by intense plasmas or flames (such as a welding arc) includes a pulsed light source to illuminate the object, the peak brightness of the light reflected from the object being greater than the brightness of the intense plasma or flame; an electronic image sensor for detecting a pulsed image of the illuminated object, the sensor being operated as a high-speed shutter; and electronic means for synchronizing the shutter operation with the pulsed light source.
Electronic imaging system and technique
Bolstad, Jon O.
1987-01-01
A method and system for viewing objects obscured by intense plasmas or flames (such as a welding arc) includes a pulsed light source to illuminate the object, the peak brightness of the light reflected from the object being greater than the brightness of the intense plasma or flame; an electronic image sensor for detecting a pulsed image of the illuminated object, the sensor being operated as a high-speed shutter; and electronic means for synchronizing the shutter operation with the pulsed light source.
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.
1994-01-01
System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.
The impact of the condenser on cytogenetic image quality in digital microscope system.
Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong
2013-01-01
Optimizing operational parameters of the digital microscope system is an important technique to acquire high quality cytogenetic images and facilitate the process of karyotyping so that the efficiency and accuracy of diagnosis can be improved. This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Both theoretical analysis and experimental validations through objectively evaluating a resolution test chart and subjectively observing large numbers of specimen were conducted. The results show that the optimal image quality and large depth of field (DOF) are simultaneously obtained when the numerical aperture of condenser is set as 60%-70% of the corresponding objective. Under this condition, more analyzable chromosomes and diagnostic information are obtained. As a result, the system shows higher working stability and less restriction for the implementation of algorithms such as autofocusing especially when the system is designed to achieve high throughput continuous image scanning. Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using the high throughput continuous scanning microscopes in clinical practice.
Adapting smartphones for low-cost optical medical imaging
NASA Astrophysics Data System (ADS)
Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina
2015-06-01
Optical images have been used in several medical situations to improve diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, operation of this sort of apparatus usually becomes more complex, requiring increasingly specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high quality cameras is increasing, which might allow using such devices as an efficient, lower cost, portable imaging system for medical applications. Thus, we aim to develop methods of adaptation of those devices to optical medical imaging techniques, such as fluorescence. Particularly, smartphone covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin-IX formation for photodynamic treatment of Cervical Intraepithelial Neoplasia. This approach may contribute significantly to low-cost, portable and simple clinical optical imaging collection.
NASA Astrophysics Data System (ADS)
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of the algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program that scales images by the different methods, together with a comparison of the scaling quality each method produces.
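The abstract does not name its three interpolation methods, so the sketch below shows just one representative (bilinear interpolation) with a simple row-strip decomposition across worker processes to illustrate how the degree of parallelization can be varied; the function names and the strip-per-process scheme are our own choices, not the paper's.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _scale_rows(args):
    """Bilinear-interpolate one horizontal strip of the output image."""
    img, row_start, row_end, sy, sx = args
    h, w = img.shape
    out = np.empty((row_end - row_start, int(w * sx)), dtype=float)
    for oy in range(row_start, row_end):
        y = min(oy / sy, h - 1.001)          # clamp so y0 + 1 stays in bounds
        y0 = int(y); fy = y - y0
        for ox in range(out.shape[1]):
            x = min(ox / sx, w - 1.001)
            x0 = int(x); fx = x - x0
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]
            bot = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
            out[oy - row_start, ox] = (1 - fy) * top + fy * bot
    return out

def scale_parallel(img, sy, sx, workers=4):
    """Split the output rows into strips and interpolate them in parallel."""
    out_h = int(img.shape[0] * sy)
    bounds = np.linspace(0, out_h, workers + 1, dtype=int)
    jobs = [(img, b0, b1, sy, sx) for b0, b1 in zip(bounds[:-1], bounds[1:])]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        strips = list(pool.map(_scale_rows, jobs))
    return np.vstack(strips)
```

On Windows, which is the platform targeted by the paper, the call to `scale_parallel` must sit under an `if __name__ == "__main__":` guard so that the worker processes can be spawned.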
Image-guided laparoscopic surgery in an open MRI operating theater.
Tsutsumi, Norifumi; Tomikawa, Morimasa; Uemura, Munenori; Akahoshi, Tomohiko; Nagao, Yoshihiro; Konishi, Kozo; Ieiri, Satoshi; Hong, Jaesung; Maehara, Yoshihiko; Hashizume, Makoto
2013-06-01
The recent development of open magnetic resonance imaging (MRI) has provided an opportunity for the next stage of image-guided surgical and interventional procedures. The purpose of this study was to evaluate the feasibility of laparoscopic surgery under the pneumoperitoneum with the system of an open MRI operating theater. Five patients underwent laparoscopic surgery with a real-time augmented reality navigation system that we previously developed in a horizontal-type 0.4-T open MRI operating theater. All procedures were performed in an open MRI operating theater. During the operations, the laparoscopic monitor clearly showed the augmented reality models of the intraperitoneal structures, such as the common bile ducts and the urinary bladder, as well as the proper positions of the prosthesis. The navigation frame rate was 8 frames per min. The mean fiducial registration error was 6.88 ± 6.18 mm in navigated cases. We were able to use magnetic resonance-incompatible surgical instruments out of the 5-Gs restriction area, as well as conventional laparoscopic surgery, and we developed a real-time augmented reality navigation system using open MRI. Laparoscopic surgery with our real-time augmented reality navigation system in the open MRI operating theater is a feasible option.
Lopes, Gil; Ribeiro, A Fernando; Sillero, Neftalí; Gonçalves-Seco, Luís; Silva, Cristiano; Franch, Marc; Trigueiros, Paulo
2016-04-19
This paper presents a road surface scanning system that operates with a trichromatic line scan camera with light emitting diode (LED) lighting achieving road surface resolution under a millimeter. It was part of a project named Roadkills-Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed in order to accommodate the complete system with standalone power generation, computer image capture and recording, controlled lighting to operate day or night without disturbance, incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels, under a meter Global Positioning System (GPS) localization, easy to utilize with any vehicle with a trailer towing system and focused on a complete low cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the performed experimentation and some obtained results, along with a discussion and comparison with existing systems. Sustained operating trailer speeds of up to 30 km/h are achievable without loss of quality at 4096 pixels' image width (1 m width of road surface) with 250 µm/pixel resolution. Higher scanning speeds can be achieved by lowering the image resolution (120 km/h with 1 mm/pixel). Computer vision algorithms are under development to operate on the captured images in order to automatically detect road-kills of amphibians.
Lopes, Gil; Ribeiro, A. Fernando; Sillero, Neftalí; Gonçalves-Seco, Luís; Silva, Cristiano; Franch, Marc; Trigueiros, Paulo
2016-01-01
This paper presents a road surface scanning system that operates with a trichromatic line scan camera with light emitting diode (LED) lighting achieving road surface resolution under a millimeter. It was part of a project named Roadkills—Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed in order to accommodate the complete system with standalone power generation, computer image capture and recording, controlled lighting to operate day or night without disturbance, incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels, under a meter Global Positioning System (GPS) localization, easy to utilize with any vehicle with a trailer towing system and focused on a complete low cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the performed experimentation and some obtained results, along with a discussion and comparison with existing systems. Sustained operating trailer speeds of up to 30 km/h are achievable without loss of quality at 4096 pixels’ image width (1 m width of road surface) with 250 µm/pixel resolution. Higher scanning speeds can be achieved by lowering the image resolution (120 km/h with 1 mm/pixel). Computer vision algorithms are under development to operate on the captured images in order to automatically detect road-kills of amphibians. PMID:27104535
Battlefield radar imaging through airborne millimetric wave SAR (Synthetic Aperture Radar)
NASA Astrophysics Data System (ADS)
Carletti, U.; Daddio, E.; Farina, A.; Morabito, C.; Pangrazi, R.; Studer, F. A.
Airborne synthetic aperture radar (SAR), operating in the millimetric-wave (mmw) region, is discussed with reference to a battlefield surveillance application. The SAR system provides high resolution real-time imaging of the battlefield and moving target detection, under adverse environmental conditions (e.g., weather, dust, smoke, obscurants). The most relevant and original aspects of the system are the band of operation (i.e., mmw in lieu of the more traditional microwave region) and the use of an unmanned platform. The former implies reduced weight and size requirements, thus allowing use of small unmanned platforms. The latter enhances the system operational effectiveness by permitting accomplishment of recognition missions in depth beyond the FEBA. An overall system architecture based on the onboard sensor, the platform, the communication equipment, and a mobile ground station is described. The main areas of ongoing investigation are presented: the simulation of the end-to-end system, and the critical technological issues such as mmw antenna, transmitter, signal processor for image formation and platform attitude errors compensation and detection and imaging of moving targets.
Examples of subjective image quality enhancement in multimedia
NASA Astrophysics Data System (ADS)
Klíma, Miloš; Pazderák, Jiří; Fliegel, Karel
2007-09-01
The subjective image quality is an important issue in all multimedia imaging systems, with a significant impact on QoS (Quality of Service). For a long time the image fidelity criterion was widely applied in technical systems, especially in the television and image source compression fields, but the optimization of subjective perception quality and the fidelity approach (such as the minimum of MSE) are very different. The paper presents an experimental testing of different digital techniques for subjective image quality enhancement - color saturation, edge enhancement, denoising operators and noise addition - well known from both digital photography and video. The evaluation has been done for extensive operator parameterization and the results are summarized and discussed. It has been demonstrated that there are relevant types of image corrections improving to some extent the subjective perception of the image. The above mentioned techniques have been tested for five image tests with significantly different image characteristics (fine details, large saturated color areas, high color contrast, easy-to-remember colors etc.). The experimental results show the way to optimized use of image enhancing operators. Finally the concept of impressiveness as a new possible expression of subjective quality improvement is presented and discussed.
Image Display And Manipulation System (IDAMS), user's guide
NASA Technical Reports Server (NTRS)
Cecil, R. W.
1972-01-01
A combination operator's guide and user's handbook for the Image Display and Manipulation System (IDAMS) is reported. Information is presented to define how to operate the computer equipment, how to structure a run deck, and how to select parameters necessary for executing a sequence of IDAMS task routines. If more detailed information is needed on any IDAMS program, see the IDAMS program documentation.
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The wide spread of digital technology in the medical field has led to a demand for the high-quality, high-speed, and user-friendly digital image presentation system in the daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90GB RAID, and a database server workstation. A 100Mbps Ethernet LAN interconnects all the sub-systems. The Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size, and found them applicable to the actual medical diagnosis. They also appreciated short image switching time that contributed to smooth presentation. Thus, we confirmed that its characteristics met the requirements.
NASA Astrophysics Data System (ADS)
Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian
2014-12-01
This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.
High-intensity power-resolved radiation imaging of an operational nuclear reactor.
Beaumont, Jonathan S; Mellor, Matthew P; Villa, Mario; Joyce, Malcolm J
2015-10-09
Knowledge of the neutron distribution in a nuclear reactor is necessary to ensure the safe and efficient burnup of reactor fuel. Currently these measurements are performed by in-core systems in what are extremely hostile environments and in most reactor accident scenarios it is likely that these systems would be damaged. Here we present a compact and portable radiation imaging system with the ability to image high-intensity fast-neutron and gamma-ray fields simultaneously. This system has been deployed to image radiation fields emitted during the operation of a TRIGA test reactor allowing a spatial visualization of the internal reactor conditions to be obtained. The imaged flux in each case is found to scale linearly with reactor power indicating that this method may be used for power-resolved reactor monitoring and for the assay of ongoing nuclear criticalities in damaged nuclear reactors.
Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian
2014-12-01
This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.
High-intensity power-resolved radiation imaging of an operational nuclear reactor
Beaumont, Jonathan S.; Mellor, Matthew P.; Villa, Mario; Joyce, Malcolm J.
2015-01-01
Knowledge of the neutron distribution in a nuclear reactor is necessary to ensure the safe and efficient burnup of reactor fuel. Currently these measurements are performed by in-core systems in what are extremely hostile environments and in most reactor accident scenarios it is likely that these systems would be damaged. Here we present a compact and portable radiation imaging system with the ability to image high-intensity fast-neutron and gamma-ray fields simultaneously. This system has been deployed to image radiation fields emitted during the operation of a TRIGA test reactor allowing a spatial visualization of the internal reactor conditions to be obtained. The imaged flux in each case is found to scale linearly with reactor power indicating that this method may be used for power-resolved reactor monitoring and for the assay of ongoing nuclear criticalities in damaged nuclear reactors. PMID:26450669
NASA Technical Reports Server (NTRS)
Thompson, T. W.
1983-01-01
Airborne synthetic aperture radars and scatterometers are operated with the goals of acquiring data to support shuttle imaging radars and support ongoing basic active microwave remote sensing research. The aircraft synthetic aperture radar is an L-band system at the 25-cm wavelength and normally operates on the CV-990 research aircraft. This radar system will be upgraded to operate at both the L-band and C-band. The aircraft scatterometers are two independent radar systems that operate at 6.3-cm and 18.8-cm wavelengths. They are normally flown on the C-130 research aircraft. These radars will be operated on 10 data flights each year to provide data to NASA-approved users. Data flights will be devoted to Shuttle Imaging Radar-B (SIR-B) underflights. Standard data products for the synthetic aperture radars include both optical and digital images. Standard data products for the scatterometers include computer compatible tapes with listings of radar cross sections (sigma-nought) versus angle of incidence. An overview of these radars and their operational procedures is provided by this user's manual.
Global Ultraviolet Imaging Processing for the GGS Polar Visible Imaging System (VIS)
NASA Technical Reports Server (NTRS)
Frank, L. A.
1997-01-01
The Visible Imaging System (VIS) on Polar spacecraft of the NASA Goddard Space Flight Center was launched into orbit about Earth on February 24, 1996. Since shortly after launch, the Earth Camera subsystem of the VIS has been operated nearly continuously to acquire far ultraviolet, global images of Earth and its northern and southern auroral ovals. The only exceptions to this continuous imaging occurred for approximately 10 days at the times of the Polar spacecraft re-orientation maneuvers in October, 1996 and April, 1997. Since launch, approximately 525,000 images have been acquired with the VIS Earth Camera. The VIS instrument operational health continues to be excellent. Since launch, all systems have operated nominally with all voltages, currents, and temperatures remaining at nominal values. In addition, the sensitivity of the Earth Camera to ultraviolet light has remained constant throughout the operation period. Revised flight software was uploaded to the VIS in order to compensate for the spacecraft wobble. This is accomplished by electronic shuttering of the sensor in synchronization with the 6-second period of the wobble, thus recovering the original spatial resolution obtainable with the VIS Earth Camera. In addition, software patches were uploaded to make the VIS immune to signal dropouts that occur in the sliprings of the despun platform mechanism. These changes have worked very well. The VIS and in particular the VIS Earth Camera is fully operational and will continue to acquire global auroral images as the sun progresses toward solar maximum conditions after the turn of the century.
Automated baseline change detection -- Phases 1 and 2. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byler, E.
1997-10-31
The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD), based on image subtraction. Absolute change detection is based on detecting any visible physical changes, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach and method rely on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. The ABCD image processing software was installed on a robotic vehicle developed under a related DOE/FETC contract DE-AC21-92MC29112 Intelligent Mobile Sensor System (IMSS) and integrated with the electronics and software. This vehicle was designed especially to navigate in DOE Waste Storage Facilities. Initial system testing was performed at Fernald in June 1996. After some further development and more extensive integration the prototype integrated system was installed and tested at the Radioactive Waste Management Facility (RWMC) at INEEL beginning in April 1997 through the present (November 1997). The integrated system, composed of ABCD imaging software and IMSS mobility base, is called MISS EVE (Mobile Intelligent Sensor System--Environmental Validation Expert). Evaluation of the integrated system in RWMC Building 628, containing approximately 10,000 drums, demonstrated an easy to use system with the ability to properly navigate through the facility, image all the defined drums, and process the results into a report delivered to the operator on a GUI interface and on hard copy. Further work is needed to make the brassboard system more operationally robust.
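The image-subtraction core of ABCD is simple to sketch: once the current inspection image has been repositioned and registered to the archived baseline, any pixel whose difference exceeds a threshold is flagged as a change. The fragment below illustrates only that final step; the threshold value and function name are illustrative, and the camera-repositioning and feature-based registration machinery the report emphasizes is not shown.

```python
import numpy as np

def baseline_change_mask(baseline, current, threshold=25):
    """Flag pixels whose absolute difference from the baseline image exceeds a
    threshold.  Assumes both images are already registered 8-bit grayscale
    arrays of the same shape; the threshold is an arbitrary illustration."""
    diff = np.abs(current.astype(np.int16) - baseline.astype(np.int16))
    mask = diff > threshold                 # True where a visible change appears
    changed_fraction = mask.mean()          # rough severity indicator
    return mask, changed_fraction
```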
NASA Astrophysics Data System (ADS)
Shectman, Stephen A.; Johns, Matthew
2003-02-01
Commissioning of the two 6.5-meter Magellan telescopes is nearing completion at the Las Campanas Observatory in Chile. The Magellan 1 primary mirror was successfully aluminized at Las Campanas in August 2000. Science operations at Magellan 1 began in February 2001. The second Nasmyth focus on Magellan 1 went into operation in September 2001. Science operations on Magellan 2 are scheduled to begin shortly. The ability to deliver high-quality images is maintained at all times by the simultaneous operation of the primary mirror support system, the primary mirror thermal control system, and a real-time active optics system, based on a Shack-Hartmann image analyzer. Residual aberrations in the delivered image (including focus) are typically 0.10-0.15" fwhm, and real images as good as 0.25" fwhm have been obtained at optical wavelengths. The mount points reliably to 2" rms over the entire sky, using a pointing model which is stable from year to year. The tracking error under typical wind conditions is better than 0.03" rms, although some degradation is observed under high wind conditions when the dome is pointed in an unfavorable direction. Instruments used at Magellan 1 during the first year of operation include two spectrographs previously used at other telescopes (B&C, LDSS-2), a mid-infrared imager (MIRAC) and an optical imager (MAGIC, the first Magellan-specific facility instrument). Two facility spectrographs are scheduled to be installed shortly: IMACS, a wide-field spectrograph, and MIKE, a double echelle spectrograph.
IMAGES: An interactive image processing system
NASA Technical Reports Server (NTRS)
Jensen, J. R.
1981-01-01
The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.
Development of a precision multimodal surgical navigation system for lung robotic segmentectomy
Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe
2018-01-01
Minimally invasive sublobar anatomical resection is becoming more and more popular to manage early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were performed using the DaVinci System™. Modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the Tile Pro multi display input of the DaVinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and demonstrated a perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed and the surgeon now oversees a complex system that improves decision making. PMID:29785294
Development of a precision multimodal surgical navigation system for lung robotic segmentectomy.
Baste, Jean Marc; Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe
2018-04-01
Minimally invasive sublobar anatomical resection is becoming more and more popular to manage early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were performed using the DaVinci System™. Modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the Tile Pro multi display input of the DaVinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and demonstrated a perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed and the surgeon now oversees a complex system that improves decision making.
NASA Astrophysics Data System (ADS)
Chen, Y. L.
2015-12-01
Measurement technologies for river flow velocity are divided into intrusive and nonintrusive methods. Intrusive methods require in-field operations; the measuring process is time consuming and likely to cause damage to operators and instruments. Nonintrusive methods require fewer operators and reduce instrument damage because nothing is attached directly to the flow. Nonintrusive measurements may use radar or image velocimetry to measure the velocities at the surface of the water flow. Image velocimetry, such as large scale particle image velocimetry (LSPIV), accesses not only a point velocity but the flow velocities over an area simultaneously, holding the promise of providing spatial information about flow fields. This study constructs a mobile system, UAV-LSPIV, by using an unmanned aerial vehicle (UAV) with LSPIV to measure flows in the field. The mobile system consists of a six-rotor UAV helicopter, a Sony nex5T camera, a gimbal, an image transfer device, a ground station and a remote control device. The active gimbal helps maintain the camera lens orthogonal to the water surface and reduces image distortion. The image transfer device allows the captured image to be monitored instantly. The operator controls the UAV with the remote control device through the ground station and can access flight data such as flying height and the GPS coordinates of the UAV. The mobile system was then applied to field experiments. The deviation between velocities measured by UAV-LSPIV in the field experiments and by a handheld Acoustic Doppler Velocimeter (ADV) is under 8%. The results suggest that UAV-LSPIV can be effectively applied to surface flow studies.
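The LSPIV principle referred to above amounts to cross-correlating an interrogation window between successive frames and reading the surface displacement off the correlation peak. The sketch below illustrates that step; the window and search sizes are illustrative, and converting the pixel displacement to velocity (using the frame interval and the ground sampling distance derived from the UAV flying height) is left to the caller.

```python
import numpy as np

def lspiv_displacement(frame_a, frame_b, y, x, win=32, search=16):
    """Return the pixel displacement (dy, dx) of the interrogation window at
    (y, x) between frame_a and frame_b, found at the normalized
    cross-correlation peak.  Assumes (y, x) lies well inside the frames."""
    a = frame_a[y:y + win, x:x + win].astype(float)
    a -= a.mean()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + win > frame_b.shape[0] or xx + win > frame_b.shape[1]:
                continue                      # candidate window falls off the frame
            b = frame_b[yy:yy + win, xx:xx + win].astype(float)
            b -= b.mean()
            corr = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
            if corr > best:
                best, best_dy, best_dx = corr, dy, dx
    return best_dy, best_dx
```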
Satou, Shouichi; Aoki, Taku; Kaneko, Junichi; Sakamoto, Yoshihiro; Hasegawa, Kiyoshi; Sugawara, Yasuhiko; Arai, Osamu; Mitake, Tsuyoshi; Miura, Koui; Kokudo, Norihiro
2014-02-01
Real-time virtual sonography is an innovative imaging technology that detects the spatial position of an ultrasound probe and immediately reconstructs a section of computed tomography (CT) and/or magnetic resonance in accordance with the ultrasound image, thereby allowing a real-time comparison of those modalities. A novel intraoperative navigation system for liver resection using real-time virtual sonography has been devised for the detection of tumors and navigation of the resection plane. Sixteen patients with hepatic malignancies (26 tumors in total) were involved in this study, and the system was used intraoperatively. The tumor size ranged from 2 to 140 mm (median, 23 mm). With the navigation system, operators could refer to the intraoperative ultrasound image displayed on the television monitor side-by-side with corresponding images of CT and/or magnetic resonance. In addition, the system overlaid the preoperative simulation on the CT image and highlighted the extent of resection so as to navigate the resection plane. Because the system used electromagnetic power in the operation room, the feasibility and safety of the system were investigated as well as its validity. The system could be used uneventfully in each operation. All of the 26 tumors scheduled to be resected were detected by the navigation system. The weight of the resected specimen correlated with the preoperatively simulated volume (R = 0.995, P < .0001). The feasibility and safety of the navigation system were confirmed. The system should be helpful for intraoperative tumor detection and navigation of liver resection.
Software components for medical image visualization and surgical planning
NASA Astrophysics Data System (ADS)
Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.
2001-05-01
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
On Max-Plus Algebra and Its Application on Image Steganography
Santoso, Kiswara Agung
2018-01-01
We propose a new steganography method to hide an image into another image using matrix multiplication operations on max-plus algebra. This is especially interesting because the matrix used in encoding or information disguises generally has an inverse, whereas matrix multiplication operations in max-plus algebra do not have an inverse. The advantage of this method is that the size of the image that can be hidden in the cover image is larger than with the previous method. The proposed method has been tested on many secret images, and the results are satisfactory, with a high level of strength and a high level of security, and can be used in various operating systems. PMID:29887761
On Max-Plus Algebra and Its Application on Image Steganography.
Santoso, Kiswara Agung; Fatmawati; Suprajitno, Herry
2018-01-01
We propose a new steganography method to hide an image into another image using matrix multiplication operations on max-plus algebra. This is especially interesting because the matrix used in encoding or information disguises generally has an inverse, whereas matrix multiplication operations in max-plus algebra do not have an inverse. The advantage of this method is that the size of the image that can be hidden in the cover image is larger than with the previous method. The proposed method has been tested on many secret images, and the results are satisfactory, with a high level of strength and a high level of security, and can be used in various operating systems.
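For reference, the max-plus matrix product that this scheme builds on replaces the sum of products with a maximum of sums, and in general it cannot be inverted. The sketch below shows the operation itself; the toy key and secret block at the end are our own illustration, not the paper's embedding procedure.

```python
import numpy as np

def maxplus_matmul(a, b):
    """Max-plus matrix 'multiplication': entry (i, j) is max_k (a[i, k] + b[k, j]).
    Unlike ordinary matrix multiplication this generally has no inverse, which is
    the property the steganographic scheme relies on."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.full((n, m), -np.inf)
    for i in range(n):
        for j in range(m):
            out[i, j] = np.max(a[i, :] + b[:, j])
    return out

# Tiny illustration (the key matrix and the way a secret block would be combined
# with the cover image are toy choices, not the paper's scheme):
key = np.array([[0.0, 3.0], [2.0, 1.0]])
secret_block = np.array([[5.0, 1.0], [4.0, 2.0]])
print(maxplus_matmul(key, secret_block))
```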
A compact 3 T all HTS cryogen-free MRI system
NASA Astrophysics Data System (ADS)
Parkinson, B. J.; Bouloukakis, K.; Slade, R. A.
2017-12-01
We have designed and built a passively shielded, cryogen-free 3 T 160 mm bore bismuth strontium calcium copper oxide HTS magnet with shielded gradient coils suitable for use in small animal imaging applications. The magnet is cooled to approximately 16 K using a two-stage cryocooler and is operated at 200 A. The magnet has been passively shimmed so as to achieve ±10 parts per million (ppm) homogeneity over a 60 mm diameter imaging volume. We have demonstrated that B0 temporal stability is fit-for-purpose despite the magnet operating in the driven mode. The system has produced good quality spin-echo and gradient echo images. This compact HTS-MRI system is emerging as a true alternative to conventional low temperature superconductor based cryogen-free MRI systems, with much more efficient cryogenics since it operates entirely from a single phase alternating current electrical supply.
The Goddard Space Flight Center Program to develop parallel image processing systems
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1972-01-01
Parallel image processing which is defined as image processing where all points of an image are operated upon simultaneously is discussed. Coherent optical, noncoherent optical, and electronic methods are considered parallel image processing techniques.
A novel chaotic image encryption scheme using DNA sequence operations
NASA Astrophysics Data System (ADS)
Wang, Xing-Yuan; Zhang, Ying-Qian; Bao, Xue-Mei
2015-10-01
In this paper, we propose a novel image encryption scheme based on DNA (Deoxyribonucleic acid) sequence operations and chaotic system. Firstly, we perform bitwise exclusive OR operation on the pixels of the plain image using the pseudorandom sequences produced by the spatiotemporal chaos system, i.e., CML (coupled map lattice). Secondly, a DNA matrix is obtained by encoding the confused image using a kind of DNA encoding rule. Then we generate the new initial conditions of the CML according to this DNA matrix and the previous initial conditions, which can make the encryption result closely depend on every pixel of the plain image. Thirdly, the rows and columns of the DNA matrix are permuted. Then, the permuted DNA matrix is confused once again. At last, after decoding the confused DNA matrix using a kind of DNA decoding rule, we obtain the ciphered image. Experimental results and theoretical analysis show that the scheme is able to resist various attacks, so it has extraordinarily high security.
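To make the first two steps concrete, the sketch below XORs the pixels with a chaotic keystream and then encodes each confused byte as four DNA bases. A logistic map stands in for the coupled map lattice, a single fixed encoding rule is used, and the feedback of the DNA matrix into new CML initial conditions described in the paper is omitted.

```python
import numpy as np

DNA_ENCODE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}  # one of the valid rules

def logistic_keystream(n, x0=0.41, r=3.999):
    """Byte keystream from a logistic map -- a simple stand-in for the CML."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def confuse_and_encode(img):
    """Step 1: XOR every pixel with the chaotic keystream (confusion).
    Step 2: encode each confused byte as four DNA bases, two bits per base.
    Parameters and the fixed rule are illustrative, not the paper's key schedule."""
    flat = img.astype(np.uint8).ravel()
    confused = flat ^ logistic_keystream(flat.size)
    dna = ["".join(DNA_ENCODE[(b >> s) & 0b11] for s in (6, 4, 2, 0)) for b in confused]
    return confused.reshape(img.shape), dna
```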
Design of embedded endoscopic ultrasonic imaging system
NASA Astrophysics Data System (ADS)
Li, Ming; Zhou, Hao; Wen, Shijie; Chen, Xiodong; Yu, Daoyin
2008-12-01
The endoscopic ultrasonic imaging system is an important component of the endoscopic ultrasonography system (EUS). Through the ultrasonic probe, EUS detects the fault histology features of the digestive organs, which are received in the form of ultrasonic echoes by the reception circuit, made up of amplification, gain compensation, filtering and A/D converter circuits. The endoscopic ultrasonic imaging system is the back-end processing system of the EUS: it receives the digital ultrasonic echo modulated by the digestive tract wall from the reception circuit and, after digital signal processing such as demodulation, acquires and displays the fault histology features as images and characteristic data. Traditional endoscopic ultrasonic imaging systems are mainly based on image acquisition and processing chips connected to a personal computer through a USB2.0 circuit, and they are expensive, structurally complicated, poorly portable, and difficult to popularize. To address these shortcomings, this paper presents methods of digital signal acquisition and processing based on embedded technology, with a core hardware structure of ARM and FPGA substituting for the traditional USB2.0-plus-personal-computer design. With a built-in FIFO and dual buffers, the FPGA implements ping-pong data storage while transferring the image data to the ARM through the EBI bus using DMA, which the ARM controls to achieve high-speed transmission. The ARM system handles image display each time a DMA transfer completes and carries out system control through drivers and applications running on the embedded operating system Windows CE, which provides a stable, safe and reliable platform for the embedded device software. Benefiting from the graphical user interface (GUI) and good performance of Windows CE, the application program not only clearly displays 511×511 pixel ultrasonic echo images but also provides a simple and friendly operating interface with mouse and touch screen, which is more convenient than the traditional endoscopic ultrasonic imaging system. We designed the whole embedded system, including the core and peripheral circuits of the FPGA and ARM, the power network circuit and the LCD display circuit; experimental verification showed that ultrasonic images are displayed properly, solving the problem of the bulk and complexity of the traditional endoscopic ultrasonic imaging system.
James W. Hoffman; Lloyd L. Coulter; Philip J Riggan
2005-01-01
The new FireMapper® 2.0 and OilMapper airborne infrared imaging systems operate in a "snapshot" mode. Both systems feature real-time display of single image frames, in any selected spectral band, on a daylight-readable tablet PC. These single frames are displayed to the operator with full temperature calibration in color or grayscale renditions. A rapid...
Spot restoration for GPR image post-processing
Paglieroni, David W; Beer, N. Reginald
2014-05-20
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
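The final step named in the abstract, identifying energy peaks in the post-processed image frame, can be approximated by a simple local-maximum search. The neighborhood size and threshold rule below are arbitrary placeholders rather than values from the patent.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_energy_peaks(energy, neighborhood=5, threshold=None):
    """Return (row, col) locations where the energy image is a local maximum
    above a threshold; a crude stand-in for the patent's peak identification."""
    if threshold is None:
        threshold = energy.mean() + 3 * energy.std()      # illustrative choice
    local_max = maximum_filter(energy, size=neighborhood) == energy
    return np.argwhere(local_max & (energy > threshold))

frame = np.random.rand(64, 64) ** 4          # stand-in for a post-processed frame
frame[20, 31] = 5.0                           # a synthetic "buried object" response
print(detect_energy_peaks(frame))             # should include [20, 31]
```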
Buried object detection in GPR images
Paglieroni, David W; Chambers, David H; Bond, Steven W; Beer, W. Reginald
2014-04-29
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Delivering images to the operating room: a web-based solution.
Bennett, W F; Tunstall, K M; Skinner, P W; Spigos, D G
2002-01-01
As radiology departments become filmless, they are discovering that it is particularly difficult to deliver images to some areas. Many departments have found that the operating room is one such area, because of space constraints and the difficulty a sterile surgeon has in manipulating the images. This report describes one method to overcome this obstacle. The authors' institution has been using a picture archiving and communication system (PACS) for approximately 3 years, and it has been a filmless department for 1 year. The PACS transfers images to a web server for distribution throughout the hospital; it is accessed with Internet Explorer without any additional software. The authors recently started a pilot program in which they installed dual flat-panel monitors in 6 operating rooms. The computers are connected to the hospital backbone by Ethernet, and graphics cards installed in the computers allow the use of dual monitors. Because the surgeons were experienced in viewing cases on the enterprise web system, they had little difficulty adapting to the operating room (OR) system. Initial reception of the system is positive. The surgeons found the web system superior to film because of the flexibility and ease of manipulating the images; images can be magnified to facilitate viewing from across the room. The ultimate goal of electronic radiology is to replace hardcopy film in all settings. The operating room is one area in which PACS has had difficulty accomplishing this goal, and most institutions have continued to print film for the OR. The authors have initiated a project that may allow web viewing in the OR. Because of limited space in the OR, an additional computer was undesirable, so the CPU tower, keyboard, and mouse were mounted on a frame on the wall. The images are displayed on 2 flat-panel monitors, which simulate the viewboxes traditionally used by the surgeons. Interviews with the surgeons have found both positive and negative aspects of the system; the overall impression is good, but the timeliness of the intraoperative films needs to be improved. The authors' pilot project of installing a web-based display system in the operating room is still being evaluated. Initial results have been positive, and if no major problems arise, the project will be expanded. These results show that it is possible to provide image delivery to the OR over the intranet that is acceptable to the surgeons.
The Design of a Portable and Deployable Solar Energy System for Deployed Military Applications
2011-04-01
Abstract - Global Positioning Systems, thermal imaging scopes, satellite phones, and other electronic devices are critical to the warfighter in Forward Operating Environments. Many are battery operated...
SITHON: An Airborne Fire Detection System Compliant with Operational Tactical Requirements
Kontoes, Charalabos; Keramitsoglou, Iphigenia; Sifakis, Nicolaos; Konstantinidis, Pavlos
2009-01-01
In response to the urgent need of fire managers for timely information on fire location and extent, the SITHON system was developed. SITHON is a fully digital thermal imaging system, integrating INS/GPS and a digital camera, designed to provide timely positioned and projected thermal images and video data streams that can be rapidly integrated into the GIS operated by Crisis Control Centres. This article presents in detail the hardware and software components of SITHON and demonstrates the first encouraging results of test flights over the Sithonia Peninsula in Northern Greece. It is envisaged that the SITHON system will soon be operated onboard various airborne platforms, including fire brigade airplanes and helicopters, as well as on UAV platforms owned and operated by the Greek Air Forces. PMID:22399963
A novel chaos-based image encryption algorithm using DNA sequence operations
NASA Astrophysics Data System (ADS)
Chai, Xiuli; Chen, Yiran; Broyde, Lucie
2017-01-01
An image encryption algorithm based on a chaotic system and deoxyribonucleic acid (DNA) sequence operations is proposed in this paper. First, the plain image is encoded into a DNA matrix, and a new wave-based permutation scheme is then performed on it. Chaotic sequences produced by a 2D Logistic chaotic map are employed for row circular permutation (RCP) and column circular permutation (CCP). Initial values and parameters of the chaotic system are calculated from the SHA-256 hash of the plain image and the given values. Then, a row-by-row image diffusion method at the DNA level is applied. A key matrix generated from the chaotic map is used to fuse the confused DNA matrix, and the initial values and system parameters of the chaotic system are renewed using the Hamming distance of the plain image. Finally, after decoding the diffused DNA matrix, we obtain the cipher image. The DNA encoding/decoding rules of the plain image and the key matrix are determined by the plain image. Experimental results and security analyses confirm that the proposed algorithm not only achieves an excellent encryption result but also resists various typical attacks.
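How a SHA-256 hash of the plain image can be folded into initial values for a chaotic map is sketched below; the specific mixing arithmetic and byte grouping are assumptions made for illustration, not the authors' exact formulas.

```python
import hashlib
import numpy as np

def chaotic_seeds_from_image(img_u8, given=(0.123, 0.456)):
    """Derive two initial values in (0, 1) for a 2D chaotic map from the
    SHA-256 digest of the plain image combined with user-given values."""
    digest = hashlib.sha256(img_u8.tobytes()).digest()       # 32 bytes
    k = np.frombuffer(digest, dtype=np.uint8).astype(np.float64)
    x0 = (given[0] + k[:16].sum() / (16 * 255.0)) % 1.0       # illustrative mixing
    y0 = (given[1] + k[16:].sum() / (16 * 255.0)) % 1.0
    return x0, y0

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)       # stand-in plain image
print(chaotic_seeds_from_image(img))
```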
GOES I/M image navigation and registration
NASA Technical Reports Server (NTRS)
Fiorello, J. L., Jr.; Oh, I. H.; Kelly, K. A.; Ranne, L.
1989-01-01
Image Navigation and Registration (INR) is the system that will be used on future Geostationary Operational Environmental Satellite (GOES) missions to locate and register radiometric imagery data. It consists of a semiclosed-loop system with a ground-based segment that generates coefficients to perform image motion compensation (IMC). The IMC coefficients are uplinked to the satellite-based segment, where they are used to adjust for the displacement of the imagery data caused by movement of the imaging instrument line of sight. The flight dynamics aspects of the INR system are discussed in terms of the attitude and orbit determination, attitude pointing, and attitude and orbit control needed to perform INR. The modeling used in the determination of orbit and attitude is discussed, along with the method of on-orbit control used in the INR system and various factors that affect stability. Also discussed are potential error sources inherent in the INR system and the operational methods of compensating for these errors.
Uhl, Eberhard; Zausinger, Stefan; Morhard, Dominik; Heigl, Thomas; Scheder, Benjamin; Rachinger, Walter; Schichor, Christian; Tonn, Jörg-Christian
2009-05-01
We report our preliminary experience in a prospective series of patients with regard to feasibility, work flow, and image quality using a multislice computed tomographic (CT) scanner combined with a frameless neuronavigation system (NNS). A sliding gantry 40-slice CT scanner was installed in a preexisting operating room. The scanner was connected to a frameless infrared-based NNS. Image data was transferred directly from the scanner into the navigation system. This allowed updating of the NNS during surgery by automated image registration based on the position of the gantry. Intraoperative CT angiography was possible. The patient was positioned on a radiolucent operating table that fits within the bore of the gantry. During image acquisition, the gantry moved over the patient. This table allowed all positions and movements like any normal operating table without compromising the positioning of the patient. For cranial surgery, a carbon-made radiolucent head clamp was fixed to the table. Experience with the first 230 patients confirms the feasibility of intraoperative CT scanning (136 patients with intracranial pathology, 94 patients with spinal lesions). After a specific work flow, interruption of surgery for intraoperative scanning can be limited to 10 to 15 minutes in cranial surgery and to 9 minutes in spinal surgery. Intraoperative imaging changed the course of surgery in 16 of the 230 cases either because control CT scans showed suboptimal screw position (17 of 307 screws, with 9 in 7 patients requiring correction) or that tumor resection was insufficient (9 cases). Intraoperative CT angiography has been performed in 7 cases so far with good image quality to determine residual flow in an aneurysm. Image quality was excellent in spinal and cranial base surgery. The system can be installed in a preexisting operating environment without the need for special surgical instruments. It increases the safety of the patient and the surgeon without necessitating a change in the existing surgical protocol and work flow. Imaging and updating of the NNS can be performed at any time during surgery with very limited time and modification of the surgical setup. Multidisciplinary use increases utilization of the system and thus improves the cost-efficiency relationship.
Real-time system for imaging and object detection with a multistatic GPR array
Paglieroni, David W; Beer, N Reginald; Bond, Steven W; Top, Philip L; Chambers, David H; Mast, Jeffrey E; Donetti, John G; Mason, Blake C; Jones, Steven M
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer
NASA Astrophysics Data System (ADS)
Luckman, Adrian J.; Allinson, Nigel M.
1989-03-01
A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.
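A "neighbourhood operator" of the kind listed above is simply a function of a pixel and its immediate surroundings; a 3x3 mean (box) filter is a minimal example. The sketch below is a generic illustration and is unrelated to the Archimedes software itself.

```python
import numpy as np

def mean_filter_3x3(img):
    """Apply a 3x3 mean (box) filter - a typical neighbourhood operator."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return (out / 9.0).astype(img.dtype)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # a digitised video frame
smoothed = mean_filter_3x3(frame)
```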
VICAR image processing system guide to system use
NASA Technical Reports Server (NTRS)
Seidman, J. B.
1977-01-01
The functional characteristics and operating requirements of the VICAR (Video Image Communication and Retrieval) system are described. An introduction to the system describes the functional characteristics and the basic theory of operation. A brief description of the data flow as well as tape and disk formats is also presented. A formal presentation of the control statement formats is given along with a guide to usage of the system. The guide provides a step-by-step reference to the creation of a VICAR control card deck. Simple examples are employed to illustrate the various options and the system response thereto.
Robust algebraic image enhancement for intelligent control systems
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morrelli, Michael
1993-01-01
Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image-capturing and object-recognition subsystems. This enhancement stage must be adaptive and must operate consistently in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach that provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images, which can be viewed as fuzzy sets.
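One simple way to realize a polynomial representation of a gray-level patch, in the spirit of (though not identical to) the model described above, is a least-squares fit of a low-order bivariate polynomial to the pixel intensities:

```python
import numpy as np

def fit_patch_polynomial(patch, degree=2):
    """Least-squares fit I(x, y) ~= sum_{i+j<=degree} c_ij * x^i * y^j.
    Returns coefficients and the design matrix; a sketch, not the authors' algebra."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x.ravel() / (w - 1)                     # normalize coordinates to [0, 1]
    y = y.ravel() / (h - 1)
    terms = [x**i * y**j for i in range(degree + 1)
                          for j in range(degree + 1 - i)]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel().astype(np.float64), rcond=None)
    return coeffs, A

patch = np.random.randint(0, 256, (16, 16)).astype(np.float64)
coeffs, A = fit_patch_polynomial(patch)
reconstruction = (A @ coeffs).reshape(patch.shape)   # smooth polynomial approximation
```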
NASA Astrophysics Data System (ADS)
Hirayama, Ryuji; Shiraki, Atsushi; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2017-07-01
We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be operated in real time. We experimentally evaluated the developed system by measuring the luminance of an LED with varying input and confirmed that the system works appropriately. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be obtained only from the prescribed viewpoint. Such directional characteristics of the system are beneficial for applications, including digital signage, security systems, art, and amusement.
Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun
2018-01-01
The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, which demonstrates the clear superiority of our method. The proposed algorithm demonstrates both better edge detection performance and improved time performance.
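The sequential core of the method, using Otsu's threshold to set the Canny operator's dual thresholds, can be expressed compactly with OpenCV. The 0.5 ratio between the low and high thresholds is a common convention assumed here rather than taken from the paper, the MapReduce distribution layer is omitted, and the input file name is hypothetical.

```python
import cv2

def otsu_canny(gray_image):
    """Edge detection with Canny thresholds derived from Otsu's method."""
    # Otsu's method returns the optimal global threshold for the image.
    otsu_thresh, _ = cv2.threshold(gray_image, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    high = otsu_thresh
    low = 0.5 * otsu_thresh        # assumed ratio; tune per application
    return cv2.Canny(gray_image, low, high)

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
if img is not None:
    edges = otsu_canny(img)
    cv2.imwrite("edges.png", edges)
```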
Analysis of simulated image sequences from sensors for restricted-visibility operations
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar
1991-01-01
A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.
The Impact of the Condenser on Cytogenetic Image Quality in Digital Microscope System
Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong
2013-01-01
Background: Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high-quality cytogenetic images and facilitating the process of karyotyping, so that the efficiency and accuracy of diagnosis can be improved. Objective: This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Methods: Both theoretical analysis and experimental validation, through objective evaluation of a resolution test chart and subjective observation of large numbers of specimens, were conducted. Results: The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set at 60%–70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and more diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed to achieve high-throughput continuous image scanning. Conclusions: Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice. PMID:23676284
A Practical and Portable Solid-State Electronic Terahertz Imaging System
Smart, Ken; Du, Jia; Li, Li; Wang, David; Leslie, Keith; Ji, Fan; Li, Xiang Dong; Zeng, Da Zhang
2016-01-01
A practical compact solid-state terahertz imaging system is presented. Various beam guiding architectures were explored and hardware performance assessed to improve its compactness, robustness, multi-functionality and simplicity of operation. The system performance in terms of image resolution, signal-to-noise ratio, the electronic signal modulation versus optical chopper, is evaluated and discussed. The system can be conveniently switched between transmission and reflection mode according to the application. A range of imaging application scenarios was explored and images of high visual quality were obtained in both transmission and reflection mode. PMID:27110791
Transonic applications of the Wake Imaging System
NASA Astrophysics Data System (ADS)
Crowder, J. P.
1982-09-01
The extension of a rapid flow-field survey method (the wake imaging system), originally developed for low-speed wind tunnel operation, to transonic wind tunnel applications is discussed. The advantage of the system, besides the simplicity and low cost of the data acquisition system, is that the probe position data are recorded as an optical image of the actual sensor and are thus unaffected by the inevitable deflections of the probe support. This permits traversing systems that are deliberately flexible and have unusual motions. Two transverse drive systems are described and several typical data images are given.
Rauch, Phillip; Lin, Pei-Jan Paul; Balter, Stephen; Fukuda, Atsushi; Goode, Allen; Hartwell, Gary; LaFrance, Terry; Nickoloff, Edward; Shepard, Jeff; Strauss, Keith
2012-05-01
Task Group 125 (TG 125) was charged with investigating the functionality of fluoroscopic automatic dose rate and image quality control logic in modern angiographic systems, paying specific attention to the spectral shaping filters and variations in the selected radiologic imaging parameters. The task group was also charged with describing the operational aspects of the imaging equipment for the purpose of assisting the clinical medical physicist with clinical set-up and performance evaluation. Although there are clear distinctions between the fluoroscopic operation of an angiographic system and its acquisition modes (digital cine, digital angiography, digital subtraction angiography, etc.), the scope of this work was limited to the fluoroscopic operation of the systems studied. The use of spectral shaping filters in cardiovascular and interventional angiography equipment has been shown to reduce patient dose. If the imaging control algorithm were programmed to work in conjunction with the selected spectral filter, and if the generator parameters were optimized for the selected filter, then image quality could also be improved. Although assessment of image quality was not included as part of this report, it was recognized that for fluoroscopic imaging the parameters that influence radiation output, differential absorption, and patient dose are also the same parameters that influence image quality. Therefore, this report will utilize the terminology "automatic dose rate and image quality" (ADRIQ) when describing the control logic in modern interventional angiographic systems and, where relevant, will describe the influence of controlled parameters on the subsequent image quality. A total of 22 angiography units were investigated by the task group and of these one each was chosen as representative of the equipment manufactured by GE Healthcare, Philips Medical Systems, Shimadzu Medical USA, and Siemens Medical Systems. All equipment, for which measurement data were included in this report, was manufactured within the three year period from 2006 to 2008. Using polymethylmethacrylate (PMMA) plastic to simulate patient attenuation, each angiographic imaging system was evaluated by recording the following parameters: tube potential in units of kilovolts peak (kVp), tube current in units of milliamperes (mA), pulse width (PW) in units of milliseconds (ms), spectral filtration setting, and patient air kerma rate (PAKR) as a function of the attenuator thickness. Data were graphically plotted to reveal the manner in which the ADRIQ control logic responded to changes in object attenuation. There were similarities in the manner in which the ADRIQ control logic operated that allowed the four chosen devices to be divided into two groups, with two of the systems in each group. There were also unique approaches to the ADRIQ control logic that were associated with some of the systems, and these are described in the report. The evaluation revealed relevant information about the testing procedure and also about the manner in which different manufacturers approach the utilization of spectral filtration, pulsed fluoroscopy, and maximum PAKR limitation. This information should be particularly valuable to the clinical medical physicist charged with acceptance testing and performance evaluation of modern angiographic systems.
Nagi, Sana Ehsen; Khan, Farhan Raza
2017-01-01
With root canal treatment, organic debris and micro-organisms are removed from the pulp space and an ideal canal preparation, conducive to hermetic obturation, is achieved. The purpose of this study was to correlate the pre-operative canal curvature with the post-operative curvature in extracted human teeth prepared with the K-3 rotary system. Root canal preparation was carried out on extracted human molars and premolars using K-3 endodontic rotary files. Pre- and post-operative digital radiographs of the teeth were taken in order to compare pre- and post-operative canal curvature. The images were saved in an image retrieval system (Gendex software, USA), and the change in canal curvature was measured using the software measuring tool (Vixwin software, USA). The Student paired t-test and the Pearson correlation test were applied at the 0.05 level of significance. There was a statistically significant difference between pre-operative and post-operative canal curvature (p-value <0.001) and a strong positive correlation (91%) between pre-operative and post-operative canal curvature in teeth prepared with the K-3 rotary files. A significant difference between pre- and post-instrumentation curvature was found. The degree of canal curvature was not correlated with the time taken for canal preparation.
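The statistics reported above, a paired t-test on pre- versus post-operative curvature and a Pearson correlation between the two, correspond to a short SciPy calculation like the following; the curvature values shown are made-up placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical canal curvatures in degrees (pre- and post-instrumentation, same teeth)
pre_op  = [28.5, 31.0, 24.7, 35.2, 29.8, 22.1, 33.4, 27.6]
post_op = [25.9, 28.4, 22.8, 32.0, 27.5, 20.3, 30.9, 25.1]

t_stat, p_value = stats.ttest_rel(pre_op, post_op)     # paired t-test
r, r_p = stats.pearsonr(pre_op, post_op)               # correlation of pre vs post

print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Pearson correlation: r = {r:.2f} (p = {r_p:.4f})")
```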
47 CFR 15.510 - Technical requirements for through-wall imaging systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Technical requirements for through-wall imaging systems. 15.510 Section 15.510 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Ultra-Wideband Operation § 15.510 Technical requirements for through-wall imaging...
NASA Astrophysics Data System (ADS)
Chan, Kenneth H.; Fried, Nathaniel M.; Fried, Daniel
2018-02-01
Previous studies have shown that reflectance imaging at wavelengths greater than 1200-nm can be used to image demineralization on tooth occlusal surfaces with high contrast and without the interference of stains. In addition, these near-IR imaging systems can be integrated with laser ablation systems for the selective removal of carious lesions. Higher wavelengths, such as 1950-nm, yield higher lesion contrast due to higher water absorption and lower scattering. In this study, a point-to-point scanning system employing diode and fiber lasers operating at 1450, 1860, 1880, and 1950-nm was used to acquire reflected light images of the tooth surface. Artificial lesions were imaged at these wavelengths to determine the highest lesion contrast. Near-IR images at 1880-nm were used to demarcate lesion areas for subsequent selective carious lesion removal using a new compact air-cooled CO2 laser prototype operating at 9.3-μm. The highest lesion contrast was at 1950-nm and the dual NIR/CO2 laser system selectively removed the simulated lesions with a mean loss of only 12-μm of sound enamel.
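Lesion contrast in near-IR reflectance studies of this kind is typically computed from the mean intensities of the lesion and the surrounding sound-enamel regions. The definition assumed below, (I_lesion - I_sound)/I_lesion for reflectance images in which lesions appear brighter, is a common convention and may differ from the exact metric used in this study.

```python
import numpy as np

def lesion_contrast(image, lesion_mask, sound_mask):
    """Contrast between demineralized (lesion) and sound enamel regions.
    Assumed definition: (I_lesion - I_sound) / I_lesion, in [0, 1] when
    the lesion is brighter than the sound enamel."""
    i_lesion = image[lesion_mask].mean()
    i_sound = image[sound_mask].mean()
    return (i_lesion - i_sound) / i_lesion

img = np.random.rand(100, 100) * 0.3
img[40:60, 40:60] += 0.5                                  # synthetic bright "lesion"
lesion = np.zeros_like(img, dtype=bool); lesion[45:55, 45:55] = True
sound = np.zeros_like(img, dtype=bool); sound[5:15, 5:15] = True
print(lesion_contrast(img, lesion, sound))
```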
Using compressive measurement to obtain images at ultra low-light-level
NASA Astrophysics Data System (ADS)
Ke, Jun; Wei, Ping
2013-08-01
In this paper, a compressive imaging architecture is used for ultra-low-light-level imaging. In such a system, features, instead of object pixels, are imaged onto a photocathode and then magnified by an image intensifier. By doing so, the system measurement SNR is increased significantly. Therefore, the new system can image objects at ultra-low light levels where a conventional system has difficulty. A PCA projection is used to collect feature measurements in this work. A linear Wiener operator and a nonlinear method based on the FoE model are used to reconstruct objects. The root mean square error (RMSE) is used to quantify system reconstruction quality.
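The measurement and linear reconstruction steps can be written out directly: features are inner products of the scene with PCA projection vectors, and the linear Wiener (LMMSE) operator maps the noisy features back to an object estimate. In the sketch below the covariance is estimated from a random training set, and all sizes and noise levels are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_feat, n_train = 256, 32, 500
train = rng.random((n_train, n_pix))               # training images (rows), stand-ins

# PCA projection matrix from the training covariance
mean = train.mean(axis=0)
cov = np.cov(train - mean, rowvar=False)           # (n_pix, n_pix)
eigval, eigvec = np.linalg.eigh(cov)
P = eigvec[:, -n_feat:].T                          # top n_feat principal directions

# Feature measurement of a new object with additive noise (low-light regime)
x = rng.random(n_pix)
sigma = 0.05
y = P @ (x - mean) + sigma * rng.standard_normal(n_feat)

# Linear Wiener (LMMSE) reconstruction: x_hat = mean + Cx P^T (P Cx P^T + s^2 I)^-1 y
W = cov @ P.T @ np.linalg.inv(P @ cov @ P.T + sigma**2 * np.eye(n_feat))
x_hat = mean + W @ y
rmse = np.sqrt(np.mean((x_hat - x) ** 2))
print(f"RMSE = {rmse:.4f}")
```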
Control Method for Video Guidance Sensor System
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)
2005-01-01
A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
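The mode transitions described above (automatic reset-to-standby after integrity checks, diagnostic-to-reset after diagnostics, acquisition-to-tracking once a target is found, and command acceptance limited to standby) can be summarized as a small state machine. This is only a paraphrase of the abstract for illustration, not the flight software.

```python
from enum import Enum, auto

class Mode(Enum):
    RESET = auto(); DIAGNOSTIC = auto(); STANDBY = auto()
    ACQUISITION = auto(); TRACKING = auto(); SPOT = auto()

class VideoGuidanceSensor:
    def __init__(self):
        self.mode = Mode.RESET

    def step(self, integrity_ok=False, diagnostics_done=False, target_found=False):
        # Automatic transitions described in the abstract
        if self.mode is Mode.RESET and integrity_ok:
            self.mode = Mode.STANDBY
        elif self.mode is Mode.DIAGNOSTIC and diagnostics_done:
            self.mode = Mode.RESET
        elif self.mode is Mode.ACQUISITION and target_found:
            self.mode = Mode.TRACKING

    def command(self, requested: Mode):
        # Reset and diagnostic commands are accepted only in standby mode
        if requested in (Mode.RESET, Mode.DIAGNOSTIC) and self.mode is not Mode.STANDBY:
            raise RuntimeError("reset/diagnostic commands accepted only in STANDBY")
        self.mode = requested

vgs = VideoGuidanceSensor()
vgs.step(integrity_ok=True)        # RESET -> STANDBY
vgs.command(Mode.ACQUISITION)
vgs.step(target_found=True)        # ACQUISITION -> TRACKING
print(vgs.mode)                    # Mode.TRACKING
```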
Control method for video guidance sensor system
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)
2005-01-01
A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer has excellent universality and expansibility and offers advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter settings, operation of the filter wheel and stabilized platform, and image and POS data acquisition, and it stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in a few instruction cycles and therefore satisfy the requirements of low- and middle-level high-speed image processing. The RISC core controls the operation of the whole system and carries out some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor are also developed.
VA's Integrated Imaging System on three platforms.
Dayhoff, R E; Maloney, D L; Majurski, W J
1992-01-01
The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.
VA's Integrated Imaging System on three platforms.
Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.
1992-01-01
The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983
The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.
2010-01-01
The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design and extending through integration and test, on-orbit operations, and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards includes multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer of the radiometric calibration to orbit. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used in addition to the solar diffuser to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.
Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias
2010-01-01
This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. Rapid-Miner, an open-source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied to the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.
Morita, Akio; Sameshima, Tetsuro; Sora, Shigeo; Kimura, Toshikazu; Nishimura, Kengo; Itoh, Hirotaka; Shibahashi, Keita; Shono, Naoyuki; Machida, Toru; Hara, Naoko; Mikami, Nozomi; Harihara, Yasushi; Kawate, Ryoichi; Ochiai, Chikayuki; Wang, Weimin; Oguro, Toshiki
2014-06-01
Magnetic resonance imaging (MRI) during surgery has been shown to improve surgical outcomes, but the current intraoperative MRI systems are too large to install in standard operating suites. Although 1 compact system is available, its imaging quality is not ideal. We developed a new compact intraoperative MRI system and evaluated its use for safety and efficacy. This new system has a magnetic gantry: a permanent magnet of 0.23 T and an interpolar distance of 32 cm. The gantry system weighs 2.8 tons and the 5-G line is within the circle of 2.6 m. We created a new field-of-view head coil and a canopy-style radiofrequency shield for this system. A clinical trial was initiated, and the system has been used in 44 patients. This system is significantly smaller than previous intraoperative MRI systems. High-quality T2 images could discriminate tumor from normal brain tissue and identify anatomic landmarks for accurate surgery. The average imaging time was 45.5 minutes, and no clinical complications or MRI system failures occurred. Floating organisms or particles were minimal (1/200 L maximum). This intraoperative, compact, low-magnetic-field MRI system can be installed in standard operating suites to provide relatively high-quality images without sacrificing safety. We believe that such a system facilitates the introduction of the intraoperative MRI.
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside a protected area ensures maximum protection and at the same time reduces the workload on personnel, increases the reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8-14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The novel ViperView(TM) high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
Surveillance and reconnaissance ground system architecture
NASA Astrophysics Data System (ADS)
Devambez, Francois
2001-12-01
Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations; easy adaptation to the evolution of these configurations; interoperability with NATO and multinational forces; security; multi-sensor, multi-platform capabilities; technical modularity; evolvability; and reduction of life cycle cost. The general performance of the MGS is presented: types of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.
Multispectral imaging for medical diagnosis
NASA Technical Reports Server (NTRS)
Anselmo, V. J.
1977-01-01
Photography technique determines amount of morbidity present in tissue. Imaging apparatus incorporates numerical filtering. Overall system operates in near-real time. Information gained from this system enables physician to understand extent of injury and leads to accelerated treatment.
Spatially assisted down-track median filter for GPR image post-processing
Paglieroni, David W; Beer, N Reginald
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Spatially adaptive migration tomography for multistatic GPR imaging
Paglieroni, David W; Beer, N. Reginald
2013-08-13
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Synthetic aperture integration (SAI) algorithm for SAR imaging
Chambers, David H; Mast, Jeffrey E; Paglieroni, David W; Beer, N. Reginald
2013-07-09
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Zero source insertion technique to account for undersampling in GPR imaging
Chambers, David H; Mast, Jeffrey E; Paglieroni, David W
2014-02-25
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Design and testing of a dual-band enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.
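A minimal per-pixel fusion of two co-registered bands, for example an uncooled LWIR frame and a SWIR or NIR frame, can be done with a normalized weighted sum. Real EVS fusion is considerably more sophisticated, so the sketch below, with an arbitrary weight, is only a toy illustration of the concept.

```python
import numpy as np

def fuse_dual_band(lwir, swir, w_lwir=0.6):
    """Weighted fusion of two co-registered sensor frames into one display image."""
    def normalize(img):
        img = img.astype(np.float64)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)
    fused = w_lwir * normalize(lwir) + (1.0 - w_lwir) * normalize(swir)
    return (255 * fused).astype(np.uint8)

lwir = np.random.randint(0, 4096, (480, 640))   # stand-in 12-bit thermal frame
swir = np.random.randint(0, 1024, (480, 640))   # stand-in SWIR frame
display = fuse_dual_band(lwir, swir)
```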
NASA Technical Reports Server (NTRS)
1992-01-01
To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.
Synthetic Aperture Acoustic Imaging of Non-Metallic Cords
2012-04-01
...collected with a research prototype synthetic aperture acoustic (SAA) imaging system. SAA imaging is an emerging technique that can serve as an inexpensive alternative or logical complement to synthetic aperture radar (SAR). The SAA imaging system uses an acoustic transceiver (speaker and
Detailed description of the Mayo/IBM PACS
NASA Astrophysics Data System (ADS)
Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Salutz, James R.; Morin, Richard L.
1991-07-01
The Mayo Clinic and IBM/Rochester have jointly developed a picture archiving system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. The system was developed to replace the imaging system's vendor-supplied magnetic tape archiving capability. The system consists of seven MR imagers and nine CT scanners, each interfaced to the PACS via IBM Personal System/2(tm) (PS/2) computers, which act as gateways from the imaging modality to the PACS network. The PAC system operates on the token-ring component of Mayo's city-wide local area network. Also on the PACS network are four optical storage subsystems used for image archival, three optical subsystems used for image retrieval, an IBM Application System/400(tm) (AS/400) computer used for database management and multiple PS/2-based image display systems and their image servers.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
A scientific operations plan for the large space telescope. [ground support system design
NASA Technical Reports Server (NTRS)
West, D. K.
1977-01-01
The paper describes an LST ground system which is compatible with the operational requirements of the LST. The goal of the approach is to minimize the cost of post launch operations without seriously compromising the quality and total throughput of LST science. Attention is given to cost constraints and guidelines, the telemetry operations processing systems (TELOPS), the image processing facility, ground system planning and data flow, and scientific interfaces.
Multi-channel medical imaging system
Frangioni, John V
2013-12-31
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in the subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.
Multi-channel medical imaging system
Frangioni, John V.
2016-05-03
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods are proposed to assist an operator viewing a telemanipulator on a video monitor in a control station when the video image is generated by a movable video camera in the remote workspace of the telemanipulator. Monitors are rotated or shifted, and/or the images in them transformed, to adjust the coordinate systems of the scenes visible to the operator according to the motions of the cameras and/or the operator's preferences. This reduces the operator's workload and probability of error by obviating the need for mental transformations of coordinates during operation. The methods apply in outer space, undersea, in the nuclear industry, in surgery, in entertainment, and in manufacturing.
NASA Astrophysics Data System (ADS)
Erickson-Bhatt, Sarah J.; Nolan, Ryan; Shemonski, Nathan D.; Adie, Steven G.; Putney, Jeffrey; Darga, Donald; McCormick, Daniel T.; Cittadine, Andrew; Marjanovic, Marina; Chaney, Eric J.; Monroy, Guillermo L.; South, Fredrick; Carney, P. Scott; Cradock, Kimberly A.; Liu, Z. George; Ray, Partha S.; Boppart, Stephen A.
2014-02-01
Breast-conserving surgery is a frequent option for women with stage I and II breast cancer, and with radiation treatment, can be as effective as a mastectomy. However, adequate margin detection remains a challenge, and too often additional surgeries are required. Optical coherence tomography (OCT) provides a potential method for real-time, high-resolution imaging of breast tissue during surgery. Intra-operative OCT imaging of excised breast tissues has been previously demonstrated by several groups. In this study, a novel handheld surgical probe-based OCT system is introduced, which was used by the surgeon to image in vivo, within the tumor cavity, and immediately following tumor removal in order to detect the presence of any remaining cancer. Following resection, study investigators imaged the excised tissue with the same probe for comparison. We present OCT images obtained from over 15 patients during lumpectomy and mastectomy surgeries. Images were compared to post-operative histopathology for diagnosis. OCT images with micron scale resolution show areas of heterogeneity and disorganized features indicative of malignancy, compared to more uniform regions of normal tissue. Video-rate acquisition shows the inside of the tumor cavity as the surgeon sweeps the probe along the walls of the surgical cavity. This demonstrates the potential of OCT for real-time assessment of surgical tumor margins and for reducing the unacceptably high re-operation rate for breast cancer patients.
Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging
NASA Astrophysics Data System (ADS)
Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.
2010-04-01
The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.
An Imaging System for Satellite Hypervelocity Impact Debris Characterization
NASA Astrophysics Data System (ADS)
Moraguez, M.; Liou, J.; Fitz-Coy, N.; Patankar, K.; Cowardin, H.
This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
An Imaging System for Satellite Hypervelocity Impact Debris Characterization
NASA Technical Reports Server (NTRS)
Moraguez, Matthew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Cowardin, Heather
2015-01-01
This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
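The space carving step described in the two DebriSat abstracts above can be illustrated with a minimal sketch: the fragment's occupancy is represented as a voxel grid, and each silhouette taken at a known turntable azimuth removes voxels whose projections fall outside the object. The function below assumes an orthographic camera at zero elevation, binary silhouette masks, and a simple pixels-per-unit scale; it is only a simplified illustration of the carving idea, not the DebriSat pipeline.

```python
import numpy as np

def carve_voxels(silhouettes, azimuths_deg, grid_size=64, half_extent=1.0):
    """Carve a voxel grid from orthographic silhouettes taken at known azimuths."""
    lin = np.linspace(-half_extent, half_extent, grid_size)
    xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")   # voxel centers
    occupied = np.ones(xs.shape, dtype=bool)

    h, w = silhouettes[0].shape
    scale = (min(h, w) - 1) / (2 * half_extent)              # pixels per world unit

    for sil, az in zip(silhouettes, azimuths_deg):
        a = np.deg2rad(az)
        # Rotate voxel centers into the camera frame (camera looks along the rotated +y axis).
        xr = np.cos(a) * xs + np.sin(a) * ys
        # Orthographic projection onto the image plane (u horizontal, v vertical).
        u = np.clip(np.round(xr * scale + w / 2).astype(int), 0, w - 1)
        v = np.clip(np.round(-zs * scale + h / 2).astype(int), 0, h - 1)
        # A voxel survives only if every silhouette covers its projection.
        occupied &= sil[v, u] > 0
    return occupied
```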
Unified Digital Image Display And Processing System
NASA Astrophysics Data System (ADS)
Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.
1981-11-01
Our institution, like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g. digital radiography, computerized tomography (CT), nuclear medicine and ultrasound). We feel that a unified, digital system approach to image management (storage, transmission and retrieval), image processing and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET) and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.
Development of an Infrared Remote Sensing System for Continuous Monitoring of Stromboli Volcano
NASA Astrophysics Data System (ADS)
Harig, R.; Burton, M.; Rausch, P.; Jordan, M.; Gorgas, J.; Gerhard, J.
2009-04-01
In order to monitor gases emitted by Stromboli volcano in the Eolian archipelago, Italy, a remote sensing system based on Fourier-transform infrared spectroscopy has been developed and installed on the summit of Stromboli volcano. Hot rocks and lava are used as sources of infrared radiation. The system is based on an interferometer with a single detector element in combination with an azimuth-elevation scanning mirror system. The mirror system is used to align the field of view of the instrument. In addition, the system is equipped with an infrared camera. Two basic modes of operation have been implemented: The user may use the infrared image to align the system to a vent that is to be examined. In addition, the scanning system may be used for (hyperspectral) imaging of the scene. In this mode, the scanning mirror moves sequentially to all positions within a region of interest defined by the operator on the image generated from the infrared camera. The spectral range used for the measurements is 1600-4200 cm^-1, allowing the quantification of many gases such as CO, CO2, SO2, and HCl. The spectral resolution is 0.5 cm^-1. In order to protect the optical, mechanical and electrical parts of the system from the volcanic gases, all components are contained in a gas-tight aluminium housing. The system is controlled via TCP/IP (data transfer by WLAN), allowing the user to operate it from a remote PC. The infrared image of the scene and measured spectra are transferred to and displayed by a remote PC at INGV or TUHH in real-time. However, the system is capable of autonomous operation on the volcano, once a measurement has been started. Measurements are stored by an internal embedded PC.
Time reversal acoustics for small targets using decomposition of the time reversal operator
NASA Astrophysics Data System (ADS)
Simko, Peter C.
The method of time reversal acoustics has been the focus of considerable interest over the last twenty years. Time reversal imaging methods have made consistent progress as effective methods for signal processing since the initial demonstration that physical time reversal methods can be used to form convergent wave fields on a localized target, even under conditions of severe multipathing. Computational time reversal methods rely on the properties of the so-called 'time reversal operator' in order to extract information about the target medium. Applications for which time reversal imaging has previously been explored include medical imaging, non-destructive evaluation, and mine detection. Emphasis in this paper will fall on two topics within the general field of computational time reversal imaging. First, we will examine previous work on developing a time reversal imaging algorithm based on the MUltiple SIgnal Classification (MUSIC) algorithm. MUSIC, though computationally very intensive, has demonstrated early promise in simulations using array-based methods applicable to true volumetric (three-dimensional) imaging. We will provide a simple algorithm through which the rank of the time reversal operator subspaces can be properly quantified, so that the rank of the associated null subspace can be accurately estimated near the central pulse wavelength in broadband imaging. Second, we will focus on the scattering from small, acoustically rigid, two-dimensional cylindrical targets of elliptical cross section. The eigenmodes of the time reversal operator have been well studied for symmetric response matrices associated with symmetric systems of scattering targets. We will expand these previous results to include more general scattering systems leading to asymmetric response matrices, for which the analytical complexity increases but the physical interpretation of the time reversal operator remains unchanged. For asymmetric responses, the qualitative properties of the time reversal operator eigenmodes remain consistent with those obtained from the more tightly constrained systems.
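As a rough illustration of the computational side discussed above, the sketch below forms a MUSIC-style pseudospectrum from a multistatic response matrix: the SVD separates signal and noise (null) subspaces, and trial steering vectors that are nearly orthogonal to the noise subspace produce large pseudospectrum values at target locations. The response matrix `K`, the steering vectors, and the assumed signal rank are placeholders; this is a schematic of the general idea, not the algorithm developed in the work.

```python
import numpy as np

def music_pseudospectrum(K, steering_vectors, signal_rank):
    """MUSIC-style pseudospectrum from a multistatic response matrix K.

    Trial steering vectors nearly orthogonal to the noise (null) subspace
    give small projections, so the reciprocal peaks at target locations."""
    U, s, Vt = np.linalg.svd(K)
    noise_subspace = U[:, signal_rank:]            # left singular vectors beyond the signal rank
    spectrum = np.empty(len(steering_vectors))
    for i, g in enumerate(steering_vectors):        # one trial vector per candidate image point
        g = g / np.linalg.norm(g)
        proj = noise_subspace.conj().T @ g
        spectrum[i] = 1.0 / (np.linalg.norm(proj) ** 2 + 1e-12)
    return spectrum
```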
Dillman, Jonathan R; Chen, Shigao; Davenport, Matthew S; Zhao, Heng; Urban, Matthew W; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L
2015-03-01
There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. The purpose of this study was to assess the repeatability and reproducibility of superficial shear wave speed measurements acquired from elasticity phantoms at varying imaging depths using three imaging methods, two US systems and multiple operators. Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems Inc. (Norfolk, VA) were utilized for our investigation. Institution No. 1 used an Acuson S3000 US system (Siemens Medical Solutions USA, Malvern, PA) and three shear wave imaging method/transducer combinations, while institution No. 2 used an Aixplorer US system (SuperSonic Imagine, Bothell, WA) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0 cm, 2.5 cm and 4.0 cm) by four operators at each institution. Student's t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single-measure intra-class correlation coefficients (ICCs) and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (P = 0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s for the Aixplorer system (P > 0.05). The coefficients of variation were low (0.5-6.8%), and interoperator agreement was near-perfect (ICCs ≥ 0.99). Shear wave imaging method and imaging depth significantly affected measured SWS (P < 0.0001). Superficial shear wave speed measurements in elasticity phantoms demonstrate minimal variability across imaging method/transducer combinations, imaging depths and operators. The exact clinical significance of this variation is uncertain and may change according to organ and specific disease state.
NASA Astrophysics Data System (ADS)
Xin, Zhaowei; Wei, Dong; Li, Dapeng; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Wang, Haiwei; Xie, Changsheng
2018-02-01
In this paper, a polarization difference liquid-crystal microlens array (PD-LCMLA) for three-dimensional imaging through turbid media is fabricated and demonstrated. The device is composed of a twisted nematic liquid-crystal cell (TNLCC), a polarizer, and a liquid-crystal microlens array. The polarizer is sandwiched between the TNLCC and the LCMLA so that the polarization-difference system can acquire the orthogonally polarized raw images. A prototype camera for polarization-difference imaging has been constructed by integrating the PD-LCMLA with an image sensor. The orthogonally polarized light-field images are recorded by switching the working state of the TNLCC. Here, by using this microstructure in conjunction with the polarization-difference algorithm, we demonstrate that three-dimensional information in scattering media can be retrieved from the polarization-difference imaging system with an electrically tunable PD-LCMLA. We further investigate the system's potential based on the flexible microstructure. The microstructure provides a wide operating range for the manipulation of incident beams and also enables multiple operation modes for imaging applications, such as conventional planar imaging, polarization imaging, and polarization-difference imaging. Since the PD-LCMLA offers very low power consumption, multiple imaging modes, and simple manufacturing, this kind of device has potential for use in many other optical and electro-optical systems.
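The core polarization-difference computation described above is straightforward once the two orthogonally polarized frames have been captured (here by switching the TNLCC state). The snippet below is a minimal sketch assuming two co-registered raw images `i_par` and `i_perp`; the sum image corresponds to conventional intensity imaging and the difference image suppresses the common scattered background.

```python
import numpy as np

def polarization_difference(i_par, i_perp, eps=1e-6):
    """Polarization-sum and polarization-difference images from two orthogonal frames."""
    i_par = i_par.astype(np.float64)
    i_perp = i_perp.astype(np.float64)
    ps = i_par + i_perp          # conventional (sum) intensity image
    pd = i_par - i_perp          # polarization-difference image
    dolp = pd / (ps + eps)       # normalized difference, a crude degree-of-polarization map
    return ps, pd, dolp
```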
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON'. The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)
2014-01-01
A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing the image's motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
Radar signal pre-processing to suppress surface bounce and multipath
Paglieroni, David W; Mast, Jeffrey E; Beer, N. Reginald
2013-12-31
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
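The final detection step in the patent abstract above, identifying peaks in the energy of the post-processed image frame, can be sketched as a simple local-maximum search. The code below is only an illustration of that step, with an arbitrary neighborhood size and threshold; it does not reproduce the pre- or post-processing described in the disclosure.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_energy_peaks(energy, threshold, size=5):
    """Return (row, col) indices of local maxima of an energy map above a threshold."""
    local_max = energy == maximum_filter(energy, size=size)   # pixel equals its neighborhood maximum
    return np.argwhere(local_max & (energy > threshold))
```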
Three-dimensional laser microvision.
Shimotahira, H; Iizuka, K; Chu, S C; Wah, C; Costen, F; Yoshikuni, Y
2001-04-10
A three-dimensional (3-D) optical imaging system offering high resolution in all three dimensions, requiring minimum manipulation and capable of real-time operation, is presented. The system derives its capabilities from use of the superstructure grating laser source in the implementation of a laser step frequency radar for depth information acquisition. A synthetic aperture radar technique was also used to further enhance its lateral resolution as well as extend the depth of focus. High-speed operation was made possible by a dual computer system consisting of a host and a remote microcomputer supported by a dual-channel Small Computer System Interface parallel data transfer system. The system is capable of operating near real time. The 3-D display of a tunneling diode, a microwave integrated circuit, and a see-through image taken by the system operating near real time are included. The depth resolution is 40 μm; lateral resolution with a synthetic aperture approach is a fraction of a micrometer and that without it is approximately 10 μm.
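The depth acquisition in a step-frequency system of the kind described above can be summarized by a short numerical example: returns measured at N equally spaced frequencies are inverse-Fourier transformed to yield a range profile, with an unambiguous range set by the frequency step. The sketch below assumes idealized complex returns and ignores dispersion and windowing; it illustrates the principle, not the instrument's actual processing chain.

```python
import numpy as np

def range_profile(frequencies_hz, complex_returns, c=3.0e8):
    """Coarse range (depth) profile from step-frequency returns via an inverse FFT."""
    n = len(frequencies_hz)
    df = frequencies_hz[1] - frequencies_hz[0]      # frequency step
    unambiguous_range = c / (2.0 * df)              # maximum unambiguous range to the target
    ranges = np.arange(n) * unambiguous_range / n   # range bin centers
    profile = np.abs(np.fft.ifft(complex_returns))  # echo magnitude per range bin
    return ranges, profile
```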
Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A
2015-09-21
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
NASA Astrophysics Data System (ADS)
Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.
2015-09-01
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
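For a discretized system, the decomposition into measurement and null components discussed in both versions of this abstract can be sketched directly with a matrix SVD. The snippet below assumes a system matrix `H` that maps a vectorized object `f` to mean data, which is a photon-counting-style, finite-dimensional simplification; the photon-processing operator in the paper acts on continuous attributes, so this is only an analogy.

```python
import numpy as np

def measurement_null_split(H, f, rcond=1e-10):
    """Split an object vector f into components visible and invisible to a linear system H."""
    U, s, Vt = np.linalg.svd(H, full_matrices=True)
    rank = int(np.sum(s > rcond * s.max()))
    V_meas = Vt[:rank].T                     # right singular vectors spanning the measurement space
    f_meas = V_meas @ (V_meas.T @ f)         # measurement-space component (seen by the system)
    f_null = f - f_meas                      # null-space component (produces no data)
    return f_meas, f_null
```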
Suppression of fixed pattern noise for infrared image system
NASA Astrophysics Data System (ADS)
Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon
2008-04-01
In this paper, we propose suppression of fixed pattern noise (FPN) and compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) is applied across changing temperatures. Soft defects appear as glittering black and white points caused by the time-varying non-uniformity of the IR detector. Both effects seriously degrade object tracking as well as image quality. The signal processing architecture of the cooled staring IRFPA imaging system holds three reference gain and offset tables, for low, normal, and high temperature. The proposed method maintains two offset tables for each reference table, so that six temperature ranges are covered in total. The proposed soft-defect compensation consists of three stages: (1) an image is separated into sub-images, (2) the motion distribution of objects between sub-images is determined, and (3) the statistical characteristics of each stationary pixel are analyzed. Experimental results show that the proposed method suppresses FPN caused by changes in the temperature distribution of the observed image in real time.
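The gain/offset tables mentioned in the signal-processing architecture above correspond to the standard two-point non-uniformity correction, which can be sketched as below. The reference frames, target levels, and the idea of selecting a table per temperature range are assumptions for illustration; the abstract's specific six-range scheme is not reproduced here.

```python
import numpy as np

def build_nuc_tables(frame_cold, frame_hot, target_cold, target_hot):
    """Per-pixel gain/offset from two uniform (blackbody) reference frames."""
    gain = (target_hot - target_cold) / (frame_hot - frame_cold)
    offset = target_cold - gain * frame_cold
    return gain, offset

def apply_nuc(raw, gain, offset):
    """Two-point non-uniformity correction: corrected = gain * raw + offset."""
    return gain * raw.astype(np.float64) + offset
```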
Analysis of straw row in the image to control the trajectory of the agricultural combine harvester
NASA Astrophysics Data System (ADS)
Shkanaev, Aleksandr Yurievich; Polevoy, Dmitry Valerevich; Panchenko, Aleksei Vladimirovich; Krokhina, Darya Alekseevna; Nailevish, Sadekov Rinat
2018-04-01
The paper proposes a solution for automatically steering a combine harvester along straw rows using images from a camera installed in the cab of the harvester. A U-Net is used to recognize straw rows in the image. The edges of a row are approximated in the segmented image by curved lines and then converted into the harvester coordinate system for the automatic operating system. The new network architecture and row-approximation approaches improved the recognition quality and the frame processing speed to 96% and 7.5 fps, respectively.
Linte, Cristian A; Moore, John; Wedlake, Chris; Bainbridge, Daniel; Guiraudon, Gérard M; Jones, Douglas L; Peters, Terry M
2009-03-01
An interventional system for minimally invasive cardiac surgery was developed for therapy delivery inside the beating heart, in the absence of direct vision. The system provides a virtual reality (VR) environment that integrates pre-operative imaging, real-time intra-operative guidance using 2D trans-esophageal ultrasound, and models of the surgical tools tracked using a magnetic tracking system. Detailed 3D dynamic cardiac models were synthesized from high-resolution pre-operative MR data and registered within the intra-operative imaging environment. The feature-based registration technique was employed to fuse pre- and intra-operative data during in vivo intracardiac procedures on porcine subjects. This method was found to be suitable for in vivo applications as it relies on easily identifiable landmarks, and hence, it ensures satisfactory alignment of pre- and intra-operative anatomy in the region of interest (4.8 mm RMS alignment accuracy) within the VR environment. Our initial experience in translating this work to guide intracardiac interventions, such as mitral valve implantation and atrial septal defect repair, demonstrated the feasibility of the methods. Surgical guidance in the absence of direct vision and with no exposure to ionizing radiation was achieved, so our virtual environment constitutes a feasible candidate for performing various off-pump intracardiac interventions.
High data volume and transfer rate techniques used at NASA's image processing facility
NASA Technical Reports Server (NTRS)
Heffner, P.; Connell, E.; Mccaleb, F.
1978-01-01
Data storage and transfer operations at a new image processing facility are described. The equipment includes high density digital magnetic tape drives and specially designed controllers to provide an interface between the tape drives and computerized image processing systems. The controller performs the functions necessary to convert the continuous serial data stream from the tape drive to a word-parallel blocked data stream which then goes to the computer-based system. With regard to the tape packing density, 1.8 × 10^10 data bits are stored on a reel of one-inch tape. System components and their operation are surveyed, and studies on advanced storage techniques are summarized.
Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N
2007-06-01
We propose an easy-to-construct digital video editing system ideal to produce video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. Mixing different streams of video input from all the devices in use in the operating room, the application of filters and effects produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium to store or re-edit or tape at a later time. From stored videography it is easy to extract high-quality, still images useful for teaching, presentations and publications. In conclusion digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recording. The use of firewire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest priced products available today.
Recent advances in automatic alignment system for the National Ignition Facility
NASA Astrophysics Data System (ADS)
Wilhelmsen, Karl; Awwal, Abdul A. S.; Kalantar, Dan; Leach, Richard; Lowe-Webb, Roger; McGuigan, David; Miller Kamm, Vicki
2011-03-01
The automatic alignment system for the National Ignition Facility (NIF) is a large-scale parallel system that directs all 192 laser beams along the 300-m optical path to a 50-micron focus at target chamber in less than 50 minutes. The system automatically commands 9,000 stepping motors to adjust mirrors and other optics based upon images acquired from high-resolution digital cameras viewing beams at various locations. Forty-five control loops per beamline request image processing services running on a LINUX cluster to analyze these images of the beams and references, and automatically steer the beams toward the target. This paper discusses the upgrades to the NIF automatic alignment system to handle new alignment needs and evolving requirements as related to various types of experiments performed. As NIF becomes a continuously-operated system and more experiments are performed, performance monitoring is increasingly important for maintenance and commissioning work. Data, collected during operations, is analyzed for tuning of the laser and targeting maintenance work. Handling evolving alignment and maintenance needs is expected for the planned 30-year operational life of NIF.
Compact, self-contained enhanced-vision system (EVS) sensor simulator
NASA Astrophysics Data System (ADS)
Tiana, Carlo
2007-04-01
We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
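The sensor-artifact modelling mentioned above (random and fixed-pattern noise, dead pixels) can be approximated with a few lines of array code. The sketch below is a generic, heavily simplified model with made-up parameter values; it is not the SIM-100 implementation. The fixed-pattern noise and dead-pixel map are drawn once per simulated sensor and then reused for every frame.

```python
import numpy as np

def simulate_sensor_frame(clean, fpn_pattern, read_noise_sigma=2.0,
                          dead_pixel_mask=None, rng=None):
    """Apply simple sensor artifacts to one clean synthetic frame."""
    rng = np.random.default_rng() if rng is None else rng
    img = clean.astype(np.float64)
    img += fpn_pattern                                     # fixed-pattern noise (constant per sensor)
    img += rng.normal(0.0, read_noise_sigma, img.shape)    # temporal read noise (new each frame)
    if dead_pixel_mask is not None:
        img[dead_pixel_mask] = 0.0                          # dead pixels read as zero
    return np.clip(img, 0.0, 255.0)

# Example setup: FPN pattern and dead-pixel map generated once per simulated sensor.
rng = np.random.default_rng(0)
fpn = rng.normal(0.0, 1.0, (480, 640))
dead = rng.random((480, 640)) < 1e-4
```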
Monitoring algal blooms in drinking water reservoirs using the Landsat-8 Operational Land Imager
In this study, we demonstrated that the Landsat-8 Operational Land Imager (OLI) sensor is a powerful tool that can provide periodic and system-wide information on the condition of drinking water reservoirs. The OLI is a multispectral radiometer (30 m spatial resolution) that allo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Hongbo; Vorobieff, Peter V.; Menicucci, David
2012-06-01
This report presents the results of experimental tests of a concept for using infrared (IR) photos to identify non-operational systems based on their glazing temperatures; operating systems have lower glazing temperatures than those in stagnation. In recent years thousands of new solar hot water (SHW) systems have been installed in some utility districts. As these numbers increase, concern is growing about the systems' dependability because installation rebates are often based on the assumption that all of the SHW systems will perform flawlessly for a 20-year period. If SHW systems routinely fail prematurely, then the utilities will have overpaid for grid-energy reduction performance that is unrealized. Moreover, utilities are responsible for replacing the energy for loads that failed SHW systems were supplying. Thus, utilities are seeking data to quantify the reliability of SHW systems. The work described herein is intended to help meet this need. The details of the experiment are presented, including a description of the SHW collectors that were examined, the testbed that was used to control the system and record data, the IR camera that was employed, and the conditions in which testing was completed. The details of the associated analysis are presented, including direct examination of the video records of operational and stagnant collectors, as well as the development of a model to predict glazing temperatures and an analysis of temporal intermittency of the images, both of which are critical to properly adjusting the IR camera for optimal performance. Many IR images and a video are presented to show the contrast between operating and stagnant collectors. The major conclusion is that the technique has potential to be applied by using an aircraft fitted with an IR camera that can fly over an area with installed SHW systems, thus recording the images. Subsequent analysis of the images can determine the operational condition of the fielded collectors. Specific recommendations are presented relative to the application of the technique, including ways to mitigate and manage potential sources of error.
Motion effects in multistatic millimeter-wave imaging systems
NASA Astrophysics Data System (ADS)
Schiessl, Andreas; Ahmed, Sherif Sayed; Schmidt, Lorenz-Peter
2013-10-01
At airport security checkpoints, authorities are demanding improved personnel screening devices for increased security. Active mm-wave imaging systems deliver the high quality images needed for reliable automatic detection of hidden threats. As mm-wave imaging systems assume static scenarios, motion effects caused by movement of persons during the screening procedure can degrade image quality, so very short measurement time is required. Multistatic imaging array designs and fully electronic scanning in combination with digital beamforming offer short measurement time together with high resolution and high image dynamic range, which are critical parameters for imaging systems used for passenger screening. In this paper, operational principles of such systems are explained, and the performance of the imaging systems with respect to motion within the scenarios is demonstrated using mm-wave images of different test objects and standing as well as moving persons. Electronic microwave imaging systems using multistatic sparse arrays are suitable for next generation screening systems, which will support on the move screening of passengers.
The 3-D image recognition based on fuzzy neural network technology
NASA Technical Reports Server (NTRS)
Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei
1993-01-01
A three-dimensional stereoscopic image recognition system based on fuzzy-neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Two CCD color camera images are fed to the preprocessing part, where several operations including RGB-HSV transformation are done. A multi-layer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a Sun SPARCstation and a special image input hardware system. An experimental result on bottle images is also presented.
Wang, Min; Tian, Yun
2018-01-01
The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times when handling large-scale datasets, which demonstrates the clear superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance. PMID:29861711
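The coupling of Otsu's method to the Canny dual threshold described above is commonly implemented by taking Otsu's global threshold as the high threshold and a fixed fraction of it as the low threshold. The OpenCV sketch below follows that convention; the 0.5 ratio is an assumption, and the MapReduce distribution of this per-image step is not shown.

```python
import cv2

def otsu_canny(gray):
    """Canny edge detection with thresholds derived from Otsu's method (8-bit grayscale input)."""
    high, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    low = 0.5 * high                      # assumed low:high threshold ratio
    return cv2.Canny(gray, low, high)

# Usage: edges = otsu_canny(cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE))
```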
Chen, Yinran; Tong, Ling; Ortega, Alejandra; Luo, Jianwen; D'hooge, Jan
2017-04-01
Today's 3-D cardiac ultrasound imaging systems suffer from relatively low spatial and temporal resolution, limiting their applicability in daily clinical practice. To address this problem, 3-D diverging wave imaging with spatial coherent compounding (DWC) as well as 3-D multiline-transmit (MLT) imaging have recently been proposed. Currently, the former improves the temporal resolution significantly at the expense of image quality and the risk of introducing motion artifacts, whereas the latter only provides a moderate gain in volume rate but mostly preserves quality. In this paper, a new technique for real-time volumetric cardiac imaging is proposed by combining the strengths of both approaches. Hereto, multiple planar (i.e., 2-D) diverging waves are simultaneously transmitted in order to scan the 3-D volume, i.e., multiplane transmit (MPT) beamforming. The performance of a 3MPT imaging system was contrasted to that of a 3-D DWC system and that of a 3-D MLT system by computer simulations during both static and moving conditions of the target structures while operating at similar volume rate. It was demonstrated that for stationary targets, the 3MPT imaging system was competitive with both the 3-D DWC and 3-D MLT systems in terms of spatial resolution and sidelobe levels (i.e., image quality). However, for moving targets, the image quality quickly deteriorated for the 3-D DWC systems while it remained stable for the 3MPT system while operating at twice the volume rate of the 3-D-MLT system. The proposed MPT beamforming approach was thus demonstrated to be feasible and competitive to state-of-the-art methodologies.
Sato, Katsushige; Nariai, Tadashi; Momose-Sato, Yoko; Kamino, Kohtaro
2017-07-01
Intrinsic optical imaging as developed by Grinvald et al. is a powerful technique for monitoring neural function in the in vivo central nervous system. The advent of this dye-free imaging has also enabled us to monitor human brain function during neurosurgical operations. We briefly describe our own experience in functional mapping of the human somatosensory cortex, carried out using intraoperative optical imaging. The maps obtained demonstrate new additional evidence of a hierarchy for sensory response patterns in the human primary somatosensory cortex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerlin, B.D.; Cerva, J.R.; Glenn, M.E.
This document describes evaluation studies and technical investigations proposed for the three-year Digital Imaging Network System (DINS) prototype project, sponsored by the U.S. Army Medical Research and Development Command, Ft. Detrick, Maryland. The project has three overall goals. The first is to install and operate a prototype DINS at each of two University-based hospitals for test purposes. The second is to evaluate key aspects of each prototype system once it is in full operation. The third is to develop guidelines and specifications for an operational DINS suitable for use by the military and others developing systems of the future. This document defines twelve overall evaluative questions for use in meeting the second and third objectives of the project and proposes studies that will answer these questions.
Detection of contraband concealed on the body using x-ray imaging
NASA Astrophysics Data System (ADS)
Smith, Gerald J.
1997-01-01
In an effort to avoid detection, smugglers and terrorists are increasingly using the body as a vehicle for transporting illicit drugs, weapons, and explosives. This trend illustrates the natural tendency of traffickers to seek the path of least resistance, as improved interdiction technology and operational effectiveness have been brought to bear on other trafficking avenues such as luggage, cargo, and parcels. In response, improved technology for human inspection is being developed using a variety of techniques. ASE's BodySearch X-ray Inspection Systems uses backscatter x-ray imaging of the human body to quickly, safely, and effectively screen for drugs, weapons, and explosives concealed on the body. This paper reviews the law enforcement and social issues involved in human inspections, and briefly describes the ASE BodySearch systems. Operator training, x-ray image interpretation, and maximizing systems effectiveness are also discussed. Finally, data collected from operation of the BodySearch system in the field is presented, and new law enforcement initiatives which have come about due to recent events are reviewed.
NASA Astrophysics Data System (ADS)
Kielkopf, John F.; Carter, B.; Brown, C.; Hart, R.; Hay, J.; Waite, I.
2007-12-01
The Digital Science Partnership, a collaboration of the University of Louisville and the University of Southern Queensland, operates a pair of 0.5-meter telescopes for teaching, research, and informal education. The instruments were installed at sites near Toowoomba, Australia, and Louisville, Kentucky in 2006. The Planewave Instruments optical systems employ a unique Dall-Kirkham design incorporating a two-element corrector that demagnifies the image, flattens the focal plane, and reduces coma. These instruments have a moderately fast f/6.8 focal ratio and maintain image quality with little vignetting over a field 42 mm in diameter (0.7 degree). With a 9-micron pixel CCD such as the KAF-6303E, the image scale of 0.55 seconds of arc per pixel typically yields seeing-limited image quality at our sites. The telescopes and their enclosure are operated in a live remote observing mode through Linux-based software, including a dome-control system that uses RFID tags for absolute rotation encoding. After several months of testing and development we have examples of images and photometry from both sites that illustrate the performance of the system. We will discuss image quality, as well as practical matters such as pointing accuracy and field acquisition, auto-guiding, communication latency in large file transfer, and our experience with remote observing assisted by teleconferencing. Time-delay-integration (TDI) imaging, in which the telescope is stationary while the CCD is clocked to track in right ascension, is under study. The technique offers wide fields of view with very high signal-to-noise ratio, and can be implemented in robotically operated instruments used in monitoring, rapid-response, and educational programs. Results for conventional and TDI imaging from the dark site in Australia compared to the brighter suburban site in Kentucky show the benefits of access to dark sites through international partnerships that remote operation technology offers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, A. L.; Biedron, S. G.; Milton, S. V.
At present, a variety of image-based diagnostics are used in particle accelerator systems. Oftentimes, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.
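As a rough sketch of the kind of model described above, a small convolutional network can map a virtual-cathode image plus two machine settings to a handful of downstream beam parameters. The architecture, layer sizes, and output count below are invented for illustration and are not the network trained on the FAST simulations.

```python
import torch
import torch.nn as nn

class BeamParamCNN(nn.Module):
    """Minimal CNN mapping an image plus two scalar settings (gun phase, solenoid
    strength) to several predicted downstream beam parameters."""
    def __init__(self, n_outputs=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling over the image
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 2, 64), nn.ReLU(),      # image features concatenated with the 2 settings
            nn.Linear(64, n_outputs),
        )

    def forward(self, image, scalars):
        x = self.features(image).flatten(1)
        return self.head(torch.cat([x, scalars], dim=1))
```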
Imager for Mars Pathfinder (IMP) image calibration
Reid, R.J.; Smith, P.H.; Lemmon, M.; Tanner, R.; Burkland, M.; Wegryn, E.; Weinberg, J.; Marcialis, R.; Britt, D.T.; Thomas, N.; Kramm, R.; Dummel, A.; Crowe, D.; Bos, B.J.; Bell, J.F.; Rueffer, P.; Gliem, F.; Johnson, J. R.; Maki, J.N.; Herkenhoff, K. E.; Singer, Robert B.
1999-01-01
The Imager for Mars Pathfinder returned over 16,000 high-quality images from the surface of Mars. The camera was well-calibrated in the laboratory, with <5% radiometric uncertainty. The photometric properties of two radiometric targets were also measured with 3% uncertainty. Several data sets acquired during the cruise and on Mars confirm that the system operated nominally throughout the course of the mission. Image calibration algorithms were developed for landed operations to correct instrumental sources of noise and to calibrate images relative to observations of the radiometric targets. The uncertainties associated with these algorithms as well as current improvements to image calibration are discussed. Copyright 1999 by the American Geophysical Union.
Use of a multimission system for cost effective support of planetary science data processing
NASA Technical Reports Server (NTRS)
Green, William B.
1994-01-01
JPL's Multimission Operations Systems Office (MOSO) provides a multimission facility at JPL for processing science instrument data from NASA's planetary missions. This facility, the Multimission Image Processing System (MIPS), is developed and maintained by MOSO to meet requirements that span the NASA family of planetary missions. Although the word 'image' appears in the title, MIPS is used to process instrument data from a variety of science instruments. This paper describes the design of a new system architecture now being implemented within the MIPS to support future planetary mission activities at significantly reduced operations and maintenance cost.
Toshiba TDF-500 High Resolution Viewing And Analysis System
NASA Astrophysics Data System (ADS)
Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.
1988-06-01
A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40Hz frame and 80Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewer and array processors' instantaneous high memory bandwidth requirement, an ultra fast memory system is used. This memory system has a bandwidth capability of 400MB/sec and a total capacity of 256MB. This bandwidth is more than adequate to support several high resolution CRT's and also the fast processing unit. This fully integrated approach allows effective real time image processing. The integrated design of viewing system, memory system and array processor are key to the imaging system. It is the intention to describe the architecture of the image system in this paper.
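A quick back-of-the-envelope check, assuming one byte per displayed pixel (an assumption, since the abstract does not state the pixel depth), shows why the 400 MB/s memory system described above can feed several of these displays and the array processor at once.

```python
pixels_per_frame = 1024 * 1536          # stated display resolution
frame_rate_hz = 40                       # stated frame rate
bytes_per_pixel = 1                      # assumed 8-bit pixels
per_display_bw = pixels_per_frame * frame_rate_hz * bytes_per_pixel
print(per_display_bw / 1e6)              # ~62.9 MB/s per display
print(400e6 / per_display_bw)            # ~6.4 such display streams fit within 400 MB/s
```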
Infrared thermographic diagnostic aid to aircraft maintenance
NASA Astrophysics Data System (ADS)
Delo, Michael; Delo, Steve
2007-04-01
Thermographic data can be used as a supplement to aircraft maintenance operations in both back shop and flight line situations. Aircraft systems such as electrical, propulsion, environmental, pitot-static, and hydraulic/pneumatic fluid systems can be inspected using a thermal infrared (IR) imager. Aircraft systems utilize electro-hydraulic, electro-mechanical, and electro-pneumatic mechanisms, which, if accessible, can be diagnosed for faults using infrared technology. Since thermographs are images of heat rather than light, the measurement principle is based on the fact that any physical object radiating energy at infrared wavelengths within the IR portion of the electromagnetic spectrum can be imaged with infrared imaging equipment. All aircraft systems being tested with infrared are required to be energized for troubleshooting, so that valuable baseline data from fully operational aircraft can be collected, archived and referenced for future comparisons.
Medical image informatics infrastructure design and applications.
Huang, H K; Wong, S T; Pietka, E
1997-01-01
Picture archiving and communication systems (PACS) is a system integration of multimodality images and health information systems designed for improving the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.
Introduction to computer image processing
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on images, and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.
CHARGE Image Generator: Theory of Operation and Author Language Support. Technical Report 75-3.
ERIC Educational Resources Information Center
Gunwaldsen, Roger L.
The image generator function and author language software support for the CHARGE (Color Halftone Area Graphics Environment) Interactive Graphics System are described. Designed initially for use in computer-assisted instruction (CAI) systems, the CHARGE Interactive Graphics System can provide graphic displays for various applications including…
Intra-operative 3D imaging system for robot-assisted fracture manipulation.
Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S
2015-01-01
Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of bone fragments is essential for a good and fast healing process. Percutaneous techniques are associated with faster recovery time and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology. The 2D nature of this technology (i.e. the image intensifier) does not provide enough information to the surgeon regarding the fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, to generate the 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was integrated into the system to track the actual pose of the bone fragments in the physical space, and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reductions accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).
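The translational and rotational error metrics quoted above can be computed from paired target and achieved fragment poses. The sketch below is illustrative only; the function name, the Euler-angle parameterization, and the per-trial aggregation are assumptions not stated in the abstract.

```python
import numpy as np

def reduction_rmse(target_xyz, achieved_xyz, target_rpy, achieved_rpy):
    """Translational (mm) and rotational (deg) RMSE over repeated reductions.

    target_xyz/achieved_xyz: (N, 3) fragment positions in mm.
    target_rpy/achieved_rpy: (N, 3) orientations as Euler angles in degrees
    (an assumed parameterization; the paper does not state one).
    """
    t_err = np.linalg.norm(np.asarray(achieved_xyz) - np.asarray(target_xyz), axis=1)
    r_err = np.linalg.norm(np.asarray(achieved_rpy) - np.asarray(target_rpy), axis=1)
    return np.sqrt(np.mean(t_err ** 2)), np.sqrt(np.mean(r_err ** 2))
```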
Comprehensive approach to image-guided surgery
NASA Astrophysics Data System (ADS)
Peters, Terence M.; Comeau, Roch M.; Kasrai, Reza; St. Jean, Philippe; Clonda, Diego; Sinasac, M.; Audette, Michel A.; Fenster, Aaron
1998-06-01
Image-guided surgery has evolved over the past 15 years from stereotactic planning, where the surgeon planned approaches to intracranial targets on the basis of 2D images presented on a simple workstation, to the use of sophisticated multimodality 3D image integration in the operating room, with guidance being provided by mechanically, optically or electromagnetically tracked probes or microscopes. In addition, sophisticated procedures such as thalamotomies and pallidotomies to relieve the symptoms of Parkinson's disease are performed with the aid of volumetric atlases integrated with the 3D image data. For operations that are performed stereotactically, that is to say via a small burr-hole in the skull, it can be assumed that the information contained in the pre-operative imaging study accurately represents the brain morphology during the surgical procedure. On the other hand, performing a procedure via an open craniotomy presents a problem. Not only does tissue shift when the operation begins, even the act of opening the skull can cause significant shift of the brain tissue due to the relief of intra-cranial pressure, or the effect of drugs. Means of tracking and correcting such shifts form an important part of the work in the field of image-guided surgery today. One approach has been through the development of intra-operative MRI systems. We describe an alternative approach which integrates intra-operative ultrasound with pre-operative MRI to track such changes in tissue morphology.
Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging
Quang, Tri T.; Kim, Hye-Yeong; Bao, Forrest Sheng; Papay, Francis A.; Edwards, W. Barry; Liu, Yang
2017-01-01
Fluorescence imaging is a powerful technique with diverse applications in intraoperative settings. Visualization of three dimensional (3D) structures and depth assessment of lesions, however, are oftentimes limited in planar fluorescence imaging systems. In this study, a novel Fluorescence Imaging Topography Scanning (FITS) system has been developed, which offers color reflectance imaging, fluorescence imaging and surface topography scanning capabilities. The system is compact and portable, and thus suitable for deployment in the operating room without disturbing the surgical flow. For system performance, parameters including near infrared fluorescence detection limit, contrast transfer functions and topography depth resolution were characterized. The developed system was tested in chicken tissues ex vivo with simulated tumors for intraoperative imaging. We subsequently conducted in vivo multimodal imaging of sentinel lymph nodes in mice using FITS and PET/CT. The PET/CT/optical multimodal images were co-registered and conveniently presented to users to guide surgeries. Our results show that the developed system can facilitate multimodal intraoperative imaging. PMID:28437441
Imaging efficacy of a targeted imaging agent for fluorescence endoscopy
NASA Astrophysics Data System (ADS)
Healey, A. J.; Bendiksen, R.; Attramadal, T.; Bjerke, R.; Waagene, S.; Hvoslef, A. M.; Johannesen, E.
2008-02-01
Colorectal cancer is a major cause of cancer death. A significant unmet clinical need exists in the area of screening for earlier and more accurate diagnosis and treatment. We have identified a fluorescence imaging agent targeted to an early stage molecular marker for colorectal cancer. The agent is administered intravenously and imaged in a far red imaging channel as an adjunct to white light endoscopy. There is experimental evidence of preclinical proof of mechanism for the agent. In order to assess potential clinical efficacy, imaging was performed with a prototype fluorescence endoscope system designed to produce clinically relevant images. A clinical laparoscope system was modified for fluorescence imaging. The system was optimised for sensitivity. Images were recorded at settings matching those expected with a clinical endoscope implementation (at video frame rate operation). The animal model was comprised of a HCT-15 xenograft tumour expressing the target at concentration levels expected in early stage colorectal cancer. Tumours were grown subcutaneously. The imaging agent was administered intravenously at a dose of 50nmol/kg body weight. The animals were killed 2 hours post administration and prepared for imaging. A 3-4mm diameter, 1.6mm thick slice of viable tumour was placed over the opened colon and imaged with the laparoscope system. A receiver operator characteristic analysis was applied to imaging results. An area under the curve of 0.98 and a sensitivity of 87% [73, 96] and specificity of 100% [93, 100] were obtained.
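The receiver operator characteristic analysis reported above (AUC, sensitivity and specificity with bracketed confidence intervals) can be reproduced from per-sample labels and fluorescence scores. The sketch below is an illustration only; scikit-learn/SciPy usage, the threshold choice, and exact binomial (Clopper-Pearson) intervals are assumptions, not the authors' stated method.

```python
import numpy as np
from scipy.stats import beta
from sklearn.metrics import roc_auc_score

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial confidence interval, a common choice for sensitivity/specificity."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def roc_summary(labels, scores, threshold):
    """labels: 1 = tumour region, 0 = normal tissue; scores: fluorescence intensity."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    auc = roc_auc_score(labels, scores)
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return auc, sens, clopper_pearson(tp, tp + fn), spec, clopper_pearson(tn, tn + fp)
```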
Operator-coached machine vision for space telerobotics
NASA Technical Reports Server (NTRS)
Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.
1991-01-01
A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which would demonstrate the feasibility of highly interactive operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
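The refinement step described above (fitting an operator-designated edge to Sobel gradient data by least squares) can be sketched as follows. This is a minimal 2-D illustration under assumptions: OpenCV for the gradients, a fixed band around the designated segment, and a total-least-squares (PCA) line fit; the original system works with 3-D wire-frame edges and a modified Sobel operator.

```python
import cv2
import numpy as np

def refine_edge(image_gray, p0, p1, band=5, grad_thresh=50):
    """Refine an operator-designated edge segment (p0 -> p1) against Sobel data.

    Returns a point on the fitted line and its unit direction vector.
    """
    gx = cv2.Sobel(image_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(image_gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)

    # Keep strong-gradient pixels within `band` pixels of the designated segment.
    ys, xs = np.nonzero(mag > grad_thresh)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    d = np.asarray(p1, np.float32) - np.asarray(p0, np.float32)
    d /= np.linalg.norm(d)
    rel = pts - np.asarray(p0, np.float32)
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])   # perpendicular distance
    pts = pts[dist < band]

    # Least-squares (PCA) line fit to the selected edge pixels.
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
    return mean, vt[0]
```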
NASA Technical Reports Server (NTRS)
Salomonson, Vincent V.
1999-01-01
In the near term, NASA is entering the peak activity period of the Earth Observing System (EOS). The EOS AM-1/"Terra" spacecraft is nearing launch and operation, to be followed soon by the New Millennium Program (NMP) Earth Observing (EO-1) mission. Other missions related to land imaging and studies include the EOS PM-1 mission, the Earth System Sciences Program (ESSP) Vegetation Canopy Lidar (VCL) mission, and the EOS/IceSat mission. These missions involve clear advances in technologies and observational capability, including improvements in multispectral imaging and other observing strategies, for example, "formation flying". Plans are underway to define the next era of EOS missions, commonly called "EOS Follow-on" or EOS II. The programmatic planning includes concepts that represent advances over the present Landsat-7 mission and that concomitantly recognize the advances being made in land imaging within the private sector. The National Polar Orbiting Environmental Satellite Series (NPOESS) Preparatory Project (NPP) is an effort that will help to transition EOS medium-resolution (herein meaning spatial resolutions near 500 meters) multispectral measurement capabilities, such as those represented by the EOS Moderate Resolution Imaging Spectroradiometer (MODIS), into the NPOESS operational series of satellites. Developments in Synthetic Aperture Radar (SAR) and passive microwave land observing capabilities are also proceeding. Beyond these efforts, the Earth Science Enterprise Technology Strategy is embarking on efforts to advance technologies in several basic areas: instruments, flight systems and operational capability, and information systems. In the case of instruments, architectures will be examined that offer significant reductions in mass, volume, and power, as well as greater observational flexibility. For flight systems and operational capability, the focus includes formation flying (including calibration and data fusion), systems operation autonomy, and mechanical and electronic innovations that can reduce spacecraft and subsystem resource requirements. The efforts in information systems will include better approaches for linking multiple data sets, extracting and visualizing information, and improvements in collecting, compressing, transmitting, processing, distributing and archiving data from multiple platforms. Overall, concepts such as sensor webs, constellations of observing systems, and rapid, tailored data availability and delivery to multiple users comprise the notions of an Earth Science Vision for the future.
Man-machine interactive imaging and data processing using high-speed digital mass storage
NASA Technical Reports Server (NTRS)
Alsberg, H.; Nathan, R.
1975-01-01
The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general purpose digital computer, together with an extensive special purpose software system, is used to perform an almost unlimited repertoire of processing operations.
EOS image data processing system definition study
NASA Technical Reports Server (NTRS)
Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.
1973-01-01
The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.
Hirakawa, Takeshi; Matsunaga, Sachihiro
2016-01-01
In plants, chromatin dynamics spatiotemporally change in response to various environmental stimuli. However, little is known about chromatin dynamics in the nuclei of plants. Here, we introduce a three-dimensional, live-cell imaging method that can monitor chromatin dynamics in nuclei via a chromatin tagging system that can visualize specific genomic loci in living plant cells. The chromatin tagging system is based on a bacterial operator/repressor system in which the repressor is fused to fluorescent proteins. A recent refinement of promoters for the system solved the problem of gene silencing and abnormal pairing frequencies between operators. Using this system, we can detect the spatiotemporal dynamics of two homologous loci as two fluorescent signals within a nucleus and monitor the distance between homologous loci. These live-cell imaging methods will provide new insights into genome organization, development processes, and subnuclear responses to environmental stimuli in plants.
NASA Astrophysics Data System (ADS)
Gomer, Nathaniel R.; Gardner, Charles W.
2014-05-01
In order to combat the threat of emplaced explosives (land mines, etc.), ChemImage Sensor Systems (CISS) has developed a multi-sensor, robot mounted sensor capable of identification and confirmation of potential threats. The system, known as STARR (Shortwave-infrared Targeted Agile Raman Robot), utilizes shortwave infrared spectroscopy for the identification of potential threats, combined with a visible short-range standoff Raman hyperspectral imaging (HSI) system for material confirmation. The entire system is mounted onto a Talon UGV (Unmanned Ground Vehicle), giving the sensor an increased area search rate and reducing the risk of injury to the operator. The Raman HSI system utilizes a fiber array spectral translator (FAST) for the acquisition of high quality Raman chemical images, allowing for increased sensitivity and improved specificity. An overview of the design and operation of the system will be presented, along with initial detection results of the fusion sensor.
Synchromodal optical in vivo imaging employing microlens array optics: a complete framework
NASA Astrophysics Data System (ADS)
Peter, Joerg
2013-03-01
A complete mathematical framework for preclinical optical imaging (OI) support comprising bioluminescence imaging (BLI), fluorescence surface imaging (FSI) and fluorescence optical tomography (FOT) is presented in which optical data is acquired by means of a microlens array (MLA) based light detector (MLA-D). The MLA-D has been developed to enable unique OI, especially in synchromodal operation with secondary imaging modalities (SIM) such as positron emission tomography (PET) or magnetic resonance imaging (MRI). An MLA-D consists of a (large-area) photon sensor array, a matched MLA for field-of-view definition, and a septum mask of specific geometry made of anodized aluminum that is positioned between the sensor and the MLA to suppress light cross-talk and to shield the sensor's radiofrequency interference signal (essential when used inside an MRI system). The software framework, while freely parameterizable for any MLA-D, is tailored towards an OI prototype system for preclinical SIM application comprising a multitude of cylindrically assembled, gantry-mounted, simultaneously operating MLA-Ds. Besides the MLA-D specificity, the framework incorporates excitation and illumination light-source declarations of large-field and point geometry to facilitate multispectral FSI and FOT as well as three-dimensional object recognition. When used in synchromodal operation, reconstructed tomographic SIM volume data can be used for co-modal image fusion and also as a prior for estimating the imaged object's 3D surface by means of gradient vector flow. Superimposed planar (without object prior) or surface-aligned inverse mapping can be performed to estimate and to fuse the emission light map with the boundary of the imaged object. Triangulation and subsequent optical reconstruction (FOT) or constrained flow estimation (BLI), both including the possibility of SIM priors, can be performed to estimate the internal three-dimensional emission light distribution. The framework exposes a number of variables controlling convergence and computational speed. Utilization and performance are illustrated on experimentally acquired data employing the OI prototype system in stand-alone operation, and when integrated into an unmodified preclinical PET system performing synchromodal BLI-PET in vivo imaging.
Performance and cost improvements in the display control module for DVE
NASA Astrophysics Data System (ADS)
Thomas, J.; Lorimer, S.
2009-05-01
The display control module (DCM) for the Driver's Vision Enhancer (DVE) system is the display part of a relatively low cost IR imaging system for land vehicles. Recent changes to operational needs associated with asymmetric warfare have added daytime operations to the uses of this mature system. This paper will discuss cost/performance tradeoffs and provide thoughts for "DVE of the future" in light of these new operational needs for the system.
In vivo imaging of the rodent eye with swept source/Fourier domain OCT
Liu, Jonathan J.; Grulkowski, Ireneusz; Kraus, Martin F.; Potsaid, Benjamin; Lu, Chen D.; Baumann, Bernhard; Duker, Jay S.; Hornegger, Joachim; Fujimoto, James G.
2013-01-01
Swept source/Fourier domain OCT is demonstrated for in vivo imaging of the rodent eye. Using commercial swept laser technology, we developed a prototype OCT imaging system for small animal ocular imaging operating in the 1050 nm wavelength range at an axial scan rate of 100 kHz with ~6 µm axial resolution. The high imaging speed enables volumetric imaging with high axial scan densities, measuring high flow velocities in vessels, and repeated volumetric imaging over time. The 1050 nm wavelength light provides increased penetration into tissue compared to standard commercial OCT systems at 850 nm. The long imaging range enables multiple operating modes for imaging the retina, posterior eye, as well as anterior eye and full eye length. A registration algorithm using orthogonally scanned OCT volumetric data sets which can correct motion on a per A-scan basis is applied to compensate for motion and merge motion corrected volumetric data for enhanced OCT image quality. Ultrahigh speed swept source OCT is a promising technique for imaging the rodent eye, providing comprehensive information on the cornea, anterior segment, lens, vitreous, posterior segment, retina and choroid. PMID:23412778
The Development of a Flexible Measuring System for Muscle Volume Using Ultrasonography
NASA Astrophysics Data System (ADS)
Fukumoto, Kiyotaka; Fukuda, Osamu; Tsubai, Masayoshi; Muraki, Satoshi
Quantification of muscle volume can be used as a means for the estimation of muscle strength. Its measuring process does not need the subject's muscular contractions, so it is completely safe and particularly suited for elderly people. Therefore, we have developed a flexible measuring system for muscle volume using ultrasonography. In this system, an ultrasound probe is installed on a link mechanism which continuously scans fragmental images along the human body surface. These images are then measured and composed into a wide area cross-sectional image based on the spatial compounding method. The flexibility of the link mechanism enables the operator to measure the images under any body posture and at any body site. The spatial compounding method significantly reduces speckle and artifact noise in the composed cross-sectional image, so that the operator can observe the individual muscles, such as the Rectus femoris, Vastus intermedius, and so on, in detail. We conducted experiments in order to examine the advantages of the system we have developed. The experimental results showed a high accuracy of the measuring position which was calculated using the link mechanism, and demonstrated the noise reduction effect based on the spatial compounding method. Finally, we confirmed high correlations between the MRI images and those of the developed system to verify the validity of the system.
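The spatial compounding step described above amounts to averaging co-registered frames where they overlap, which suppresses speckle. The sketch below is illustrative only; it assumes the fragmental frames have already been resampled onto a common grid using the probe poses from the link mechanism, which is the harder part of the actual system.

```python
import numpy as np

def spatial_compound(frames, masks):
    """Average co-registered ultrasound frames to reduce speckle.

    frames: list of 2-D arrays already resampled onto a common grid.
    masks:  list of boolean arrays marking the valid pixels of each frame.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    cnt = np.zeros_like(frames[0], dtype=np.float64)
    for f, m in zip(frames, masks):
        acc[m] += f[m]
        cnt[m] += 1.0
    out = np.zeros_like(acc)
    np.divide(acc, cnt, out=out, where=cnt > 0)   # avoid division by zero outside coverage
    return out
```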
NASA Astrophysics Data System (ADS)
Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee
2018-04-01
In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.
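The objective function described above weighs user-assigned priority against resource consumption and image-acquisition time. The sketch below is a hedged illustration of that kind of fitness function for the genetic algorithm; the field names, weights, and linear form are assumptions, since the abstract only lists the components that enter the objective.

```python
from dataclasses import dataclass

@dataclass
class Task:
    priority: float        # user-assigned priority (higher = more important)
    power_cost: float      # electric power consumed by the acquisition chain
    memory_cost: float     # onboard memory consumed before downlink
    acq_time: float        # image-acquisition time in seconds

def fitness(schedule, w_priority=1.0, w_resource=0.3, w_time=0.1):
    """Score a candidate schedule (a list of scheduled Tasks) for the GA.

    Weights are illustrative; infeasible schedules (visibility or resource
    conflicts) would be repaired or penalized before scoring.
    """
    score = 0.0
    for task in schedule:
        score += w_priority * task.priority
        score -= w_resource * (task.power_cost + task.memory_cost)
        score -= w_time * task.acq_time
    return score
```

The genetic algorithm would then rank chromosomes (orderings of tasks assigned to satellites and ground-station passes) by this score across generations.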
A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System
Wu, Xiangjun; Li, Yang; Kurths, Jürgen
2015-01-01
Chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate the initial conditions and parameters of the two chaotic systems, a 280-bit long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations such as noise adding, cropping, JPEG compression, rotation, brightening and darkening has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
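The division-shuffling step described above (divide the plain image into four sub-images, then permute pixel positions over the whole image) can be illustrated as follows. This is a simplified sketch: the permutation here is drawn from a seeded NumPy generator purely for illustration, whereas the paper derives it from the CML/fractional-order chaotic systems keyed by the 280-bit secret key; even image dimensions are also assumed.

```python
import numpy as np

def divide_and_shuffle(img, key_seed):
    """Divide an image into four sub-images, then shuffle all pixel positions.

    Assumes even height and width. Returns the shuffled image and the
    permutation, which is needed for decryption.
    """
    h, w = img.shape[:2]
    h2, w2 = h // 2, w // 2
    subs = [img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:]]
    stacked = np.concatenate([s.reshape(-1, *img.shape[2:]) for s in subs])

    rng = np.random.default_rng(key_seed)           # stand-in for the chaotic key stream
    perm = rng.permutation(stacked.shape[0])
    return stacked[perm].reshape(img.shape), perm

def unshuffle(shuffled, perm, shape):
    """Invert the permutation and reassemble the four sub-images."""
    h, w = shape[:2]
    h2, w2 = h // 2, w // 2
    flat = shuffled.reshape(-1, *shape[2:])
    ordered = flat[np.argsort(perm)]                # back to sub-image order
    n = h2 * w2
    subs = [ordered[i * n:(i + 1) * n].reshape(h2, w2, *shape[2:]) for i in range(4)]
    out = np.empty(shape, dtype=shuffled.dtype)
    out[:h2, :w2], out[:h2, w2:], out[h2:, :w2], out[h2:, w2:] = subs
    return out
```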
NASA Astrophysics Data System (ADS)
Zheng, Guoyan
2007-03-01
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of surgical action. In the so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from the preoperatively acquired tomographic data or through intraoperatively digitized anatomy landmarks, a photorealistic rendering of the surgical action has been identified to greatly improve usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
ERIC Educational Resources Information Center
Gopal, Venkatesh; Klosowiak, Julian L.; Jaeger, Robert; Selimkhanov, Timur; Hartmann, Mitra J. Z.
2008-01-01
We describe the construction and operation of three low-cost schlieren imaging systems that can be fabricated using surplus optics and 80/20, an aluminium extrusion based construction system. Each system has a different optical configuration. The low cost and ease of construction makes these systems highly suitable for high-school and…
Novel application of simultaneous multi-image display during complex robotic abdominal procedures
2014-01-01
Background: The surgical robot offers the potential to integrate multiple views into the surgical console screen, and for the assistant's monitors to provide real-time views of both fields of operation. This function has the potential to increase patient safety and surgical efficiency during an operation. Herein, we present a novel application of the multi-image display system for simultaneous visualization of endoscopic views during various complex robotic gastrointestinal operations. All operations were performed using the da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA, USA) with the assistance of Tilepro, multi-input display software, during employment of the intraoperative scopes. Three robotic operations, left hepatectomy with intraoperative common bile duct exploration, low anterior resection, and radical distal subtotal gastrectomy with intracorporeal gastrojejunostomy, were performed by three different surgeons at a tertiary academic medical center. Results: The three complex robotic abdominal operations were successfully completed without difficulty or intraoperative complications. The use of the Tilepro to simultaneously visualize the images from the colonoscope, gastroscope, and choledochoscope made it possible to perform additional intraoperative endoscopic procedures without extra monitors or interference with the operations. Conclusion: We present a novel use of the multi-input display program on the da Vinci Surgical System to facilitate the performance of intraoperative endoscopies during complex robotic operations. Our study offers another potentially beneficial application of the robotic surgery platform toward integration and simplification of combining additional procedures with complex minimally invasive operations. PMID:24628761
Frangioni, John V [Wayland, MA
2012-07-24
A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remains in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may also employ dyes or other fluorescent substances associated with antibodies, antibody fragments, or ligands that accumulate within a region of diagnostic significance. In one embodiment, the system provides an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide that is used to capture images. In another embodiment, the system is configured for use in open surgical procedures by providing an operating area that is closed to ambient light. More broadly, the systems described herein may be used in imaging applications where a visible light image may be usefully supplemented by an image formed from fluorescent emissions from a fluorescent substance that marks areas of functional interest.
Advanced millimeter wave imaging systems
NASA Technical Reports Server (NTRS)
Schuchardt, J. M.; Gagliano, J. A.; Stratigos, J. A.; Webb, L. L.; Newton, J. M.
1980-01-01
Unique techniques are being utilized to develop self-contained imaging radiometers operating at single and multiple frequencies near 35, 95 and 183 GHz. These techniques include medium to large antennas for high spatial resolution, low-loss open structures for RF confinement and calibration, and wide bandwidths for good sensitivity, plus total automation of the unit operation and data collection. Applications include: detection of severe storms, imaging of motor vehicles, and the remote sensing of changes in material properties.
Wood, T J; Moore, C S; Stephens, A; Saunderson, J R; Beavis, A W
2015-09-01
Given the increasing use of computed tomography (CT) in the UK over the last 30 years, it is essential to ensure that all imaging protocols are optimised to keep radiation doses as low as reasonably practicable, consistent with the intended clinical task. However, the complexity of modern CT equipment can make this task difficult to achieve in practice. Recent results of local patient dose audits have shown discrepancies between two Philips CT scanners that use the DoseRight 2.0 automatic exposure control (AEC) system in the 'automatic' mode of operation. The use of this system can result in drifting dose and image quality performance over time as it is designed to evolve based on operator technique. The purpose of this study was to develop a practical technique for configuring examination protocols on four CT scanners that use the DoseRight 2.0 AEC system in the 'manual' mode of operation. This method used a uniform phantom to generate reference images which form the basis for how the AEC system calculates exposure factors for any given patient. The results of this study have demonstrated excellent agreement in the configuration of the CT scanners in terms of average patient dose and image quality when using this technique. This work highlights the importance of CT protocol harmonisation in a modern Radiology department to ensure both consistent image quality and radiation dose. Following this study, the average radiation dose for a range of CT examinations has been reduced without any negative impact on clinical image quality.
Automated imaging system for single molecules
Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel
2012-09-18
There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.
Lock-in imaging with synchronous digital mirror demodulation
NASA Astrophysics Data System (ADS)
Bush, Michael G.
2010-04-01
Lock-in imaging enables high contrast imaging in adverse conditions by exploiting a modulated light source and homodyne detection. We report results on a patent-pending lock-in imaging system fabricated from commercial-off-the-shelf parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts, we are able to present a low cost, high resolution, high sensitivity camera with applications in search and rescue, friend or foe identification (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for dual band multispectral imaging or high dynamic range imaging, increasing the flexibility in different operational settings.
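The homodyne principle behind lock-in imaging can be illustrated numerically: each pixel of a frame stack is multiplied by in-phase and quadrature references at the illumination modulation frequency and averaged, which rejects unmodulated background light. The system described performs the demodulation optically with a spatial light modulator; the software version below is only a sketch of the same principle, with the frame rate and modulation frequency as assumed inputs.

```python
import numpy as np

def lockin_demodulate(frames, frame_rate_hz, mod_freq_hz):
    """Per-pixel lock-in demodulation of a (T, H, W) frame stack."""
    frames = np.asarray(frames, dtype=np.float64)
    t = np.arange(frames.shape[0]) / frame_rate_hz
    ref_i = np.cos(2 * np.pi * mod_freq_hz * t)[:, None, None]   # in-phase reference
    ref_q = np.sin(2 * np.pi * mod_freq_hz * t)[:, None, None]   # quadrature reference
    i_img = (frames * ref_i).mean(axis=0)
    q_img = (frames * ref_q).mean(axis=0)
    return 2.0 * np.hypot(i_img, q_img)   # amplitude of the modulated component only
```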
Two-dimensional vacuum ultraviolet images in different MHD events on the EAST tokamak
NASA Astrophysics Data System (ADS)
Zhijun, WANG; Xiang, GAO; Tingfeng, MING; Yumin, WANG; Fan, ZHOU; Feifei, LONG; Qing, ZHUANG; EAST Team
2018-02-01
A high-speed vacuum ultraviolet (VUV) imaging telescope system has been developed to measure the edge plasma emission (including the pedestal region) in the Experimental Advanced Superconducting Tokamak (EAST). The key optics of the high-speed VUV imaging system consists of three parts: an inverse Schwarzschild-type telescope, a micro-channel plate (MCP) and a visible imaging high-speed camera. The VUV imaging system has been operated routinely in the 2016 EAST experiment campaign. The dynamics of the two-dimensional (2D) images of magnetohydrodynamic (MHD) instabilities, such as edge localized modes (ELMs), tearing-like modes and disruptions, have been observed using this system. The related VUV images are presented in this paper, and it indicates the VUV imaging system is a potential tool which can be applied successfully in various plasma conditions.
The moderate resolution imaging spectrometer (MODIS) science and data system requirements
NASA Technical Reports Server (NTRS)
Ardanuy, Philip E.; Han, Daesoo; Salomonson, Vincent V.
1991-01-01
The Moderate Resolution Imaging Spectrometer (MODIS) has been designated as a facility instrument on the first NASA polar orbiting platform as part of the Earth Observing System (EOS) and is scheduled for launch in the late 1990s. The near-global daily coverage of MODIS, combined with its continuous operation, broad spectral coverage, and relatively high spatial resolution, makes it central to the objectives of EOS. The development, implementation, production, and validation of the core MODIS data products define a set of functional, performance, and operational requirements on the data system that operate between the sensor measurements and the data products supplied to the user community. The science requirements guiding the processing of MODIS data are reviewed, and the aspects of an operations concept for the production of data products from MODIS for use by the scientific community are discussed.
Santitissadeekorn, N; Bollt, E M
2007-06-01
In this paper, we present an approach to approximate the Frobenius-Perron transfer operator from a sequence of time-ordered images, that is, a movie dataset. Unlike time-series data, successive images do not provide a direct access to a trajectory of a point in a phase space; more precisely, a pixel in an image plane. Therefore, we reconstruct the velocity field from image sequences based on the infinitesimal generator of the Frobenius-Perron operator. Moreover, we relate this problem to the well-known optical flow problem from the computer vision community and we validate the continuity equation derived from the infinitesimal operator as a constraint equation for the optical flow problem. Once the vector field and then a discrete transfer operator are found, then, in addition, we present a graph modularity method as a tool to discover basin structure in the phase space. Together with a tool to reconstruct a velocity field, this graph-based partition method provides us with a way to study transport behavior and other ergodic properties of measurable dynamical systems captured only through image sequences.
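Once a velocity field on the image plane has been reconstructed, a discrete transfer operator can be built by an Ulam-type box discretization: sample points in each grid box are advected one time step and the box-to-box transition counts form a row-stochastic matrix approximating the Frobenius-Perron operator. The sketch below is illustrative only; the single Euler step, the regular grid, and the sampling scheme are assumptions rather than the authors' construction.

```python
import numpy as np

def ulam_transfer_matrix(vx, vy, dt, n_boxes=(16, 16), samples_per_box=10, seed=None):
    """Approximate the Frobenius-Perron operator on a regular grid of boxes.

    vx, vy: velocity components sampled on an (H, W) pixel grid (e.g. from
    optical flow). dt: time step. Returns a row-stochastic transition matrix.
    """
    rng = np.random.default_rng(seed)
    h, w = vx.shape
    by, bx = n_boxes
    P = np.zeros((by * bx, by * bx))
    for iy in range(by):
        for ix in range(bx):
            # Sample points uniformly inside the source box.
            ys = rng.uniform(iy * h / by, (iy + 1) * h / by, samples_per_box)
            xs = rng.uniform(ix * w / bx, (ix + 1) * w / bx, samples_per_box)
            yi = np.clip(ys.astype(int), 0, h - 1)
            xi = np.clip(xs.astype(int), 0, w - 1)
            # One Euler step along the reconstructed velocity field.
            ys2 = np.clip(ys + dt * vy[yi, xi], 0, h - 1e-9)
            xs2 = np.clip(xs + dt * vx[yi, xi], 0, w - 1e-9)
            dest = (ys2 * by / h).astype(int) * bx + (xs2 * bx / w).astype(int)
            np.add.at(P, (iy * bx + ix, dest), 1.0)
    return P / P.sum(axis=1, keepdims=True)
```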
EDITORIAL: Imaging Systems and Techniques Imaging Systems and Techniques
NASA Astrophysics Data System (ADS)
Giakos, George; Yang, Wuqiang; Petrou, M.; Nikita, K. S.; Pastorino, M.; Amanatiadis, A.; Zentai, G.
2011-10-01
This special feature on Imaging Systems and Techniques comprises 27 technical papers, covering essential facets in imaging systems and techniques both in theory and applications, from research groups spanning three different continents. It mainly contains peer-reviewed articles from the IEEE International Conference on Imaging Systems and Techniques (IST 2011), held in Thessaloniki, Greece, as well as a number of articles relevant to the scope of this issue. The multifaceted field of imaging requires drastic adaptation to the rapid changes in our society, economy, environment, and the technological revolution; there is an urgent need to address and propose dynamic and innovative solutions to problems that tend to be either complex and static or rapidly evolving with a lot of unknowns. For instance, exploration of the engineering and physical principles of new imaging systems and techniques for medical applications, remote sensing, monitoring of space resources and enhanced awareness, exploration and management of natural resources, and environmental monitoring, are some of the areas that need to be addressed with urgency. Similarly, the development of efficient medical imaging techniques capable of providing physiological information at the molecular level is another important area of research. Advanced metabolic and functional imaging techniques, operating on multiple physical principles, using high resolution and high selectivity nanoimaging techniques, can play an important role in the diagnosis and treatment of cancer, as well as provide efficient drug-delivery imaging solutions for disease treatment with increased sensitivity and specificity. On the other hand, technical advances in the development of efficient digital imaging systems and techniques and tomographic devices operating on electric impedance tomography, computed tomography, single-photon emission and positron emission tomography detection principles are anticipated to have a significant impact on a wide spectrum of technological areas, such as medical imaging, the pharmaceutical industry, analytical instrumentation, aerospace, remote sensing, lidars and ladars, surveillance, national defense, corrosion imaging and monitoring, and sub-terrestrial and marine imaging. The complexity of the involved imaging scenarios, and demanding design parameters such as speed, signal-to-noise ratio, high specificity, high contrast and spatial resolution, high-scatter rejection, complex background and harsh environment, necessitate the development of a multifunctional, scalable and efficient imaging suite of sensors, with solutions driven by innovation and operating on diverse detection and imaging principles. Finally, pattern recognition and image processing algorithms can significantly contribute to enhanced detection and imaging, including object classification, clustering, feature selection, texture analysis, segmentation, image compression and color representation under complex imaging scenarios, with applications in medical imaging, remote sensing, aerospace, radars, defense and homeland security. We feel confident that the exciting new contributions of this special feature on Imaging Systems and Techniques will appeal to the technical community. We would like to thank all authors as well as all anonymous reviewers and the MST Editorial Board, Publisher and staff for their tremendous efforts and invaluable support to enhance the quality of this significant endeavor.
Visual information mining in remote sensing image archives
NASA Astrophysics Data System (ADS)
Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.
2002-01-01
The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated in the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational data base. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the data base. Thus new tools have been designed to visualize, in iconic representation the relationships created during a query or information mining operation: the visualization of the query results positioned on the geographical map, quick-looks gallery, visualization of the measure of goodness of the query, visualization of the image space for statistical evaluation purposes. Additionally the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.
NASA Astrophysics Data System (ADS)
Gong, Ren Hui; Jenkins, Brad; Sze, Raymond W.; Yaniv, Ziv
2014-03-01
The skills required for obtaining informative x-ray fluoroscopy images are currently acquired while trainees provide clinical care. As a consequence, trainees and patients are exposed to higher doses of radiation. Use of simulation has the potential to reduce this radiation exposure by enabling trainees to improve their skills in a safe environment prior to treating patients. We describe a low cost, high fidelity, fluoroscopy simulation system. Our system enables operators to practice their skills using the clinical device and simulated x-rays of a virtual patient. The patient is represented using a set of temporal Computed Tomography (CT) images, corresponding to the underlying dynamic processes. Simulated x-ray images, digitally reconstructed radiographs (DRRs), are generated from the CTs using ray-casting with customizable machine specific imaging parameters. To establish the spatial relationship between the CT and the fluoroscopy device, the CT is virtually attached to a patient phantom and a web camera is used to track the phantom's pose. The camera is mounted on the fluoroscope's intensifier and the relationship between it and the x-ray source is obtained via calibration. To control image acquisition the operator moves the fluoroscope as in normal operation mode. Control of zoom, collimation and image save is done using a keypad mounted alongside the device's control panel. Implementation is based on the Image-Guided Surgery Toolkit (IGSTK), and the use of the graphics processing unit (GPU) for accelerated image generation. Our system was evaluated by 11 clinicians and was found to be sufficiently realistic for training purposes.
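The core of the simulator is the generation of digitally reconstructed radiographs from the CT volume. The sketch below is a deliberately simplified, parallel-projection version of that idea (line integrals of attenuation with the Beer-Lambert law); the actual system uses the calibrated perspective fluoroscope geometry and GPU ray-casting, and the attenuation-conversion constants below are assumptions.

```python
import numpy as np

def parallel_drr(ct_hu, axis=0, mu_water=0.02, spacing_mm=1.0):
    """Simplified digitally reconstructed radiograph from a CT volume.

    ct_hu: 3-D array of Hounsfield units. Attenuation is integrated along
    `axis` and converted to detector intensity with the Beer-Lambert law.
    """
    mu = mu_water * (1.0 + np.asarray(ct_hu, dtype=np.float64) / 1000.0)
    mu = np.clip(mu, 0.0, None)                  # air contributes nothing negative
    line_integral = mu.sum(axis=axis) * spacing_mm
    return np.exp(-line_integral)                # normalized intensity, I/I0
```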
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
NASA Astrophysics Data System (ADS)
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
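A pixel exchange operation between two plain images can be illustrated as a keyed, position-wise swap that yields two scrambled images plus the exchange mask needed for decryption. This is only a sketch under assumptions: the keyed NumPy generator stands in for whatever key-driven rule the authors use, and the binary-mask form is an illustration, not their exact operator.

```python
import numpy as np

def pixel_exchange(img_a, img_b, key_seed):
    """Exchange pixels between two equal-sized images under a keyed binary mask.

    Returns the two scrambled images and the mask, which plays a role
    analogous to the pixel position matrices kept as private key material.
    """
    rng = np.random.default_rng(key_seed)
    mask = rng.integers(0, 2, size=img_a.shape).astype(bool)
    out_a = np.where(mask, img_b, img_a)
    out_b = np.where(mask, img_a, img_b)
    return out_a, out_b, mask

# Decryption applies the same swap again with the stored mask:
# img_a == np.where(mask, out_b, out_a)
```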
Soft X-ray Foucault test: A path to diffraction-limited imaging
NASA Astrophysics Data System (ADS)
Ray-Chaudhuri, A. K.; Ng, W.; Liang, S.; Cerrina, F.
1994-08-01
We present the development of a soft X-ray Foucault test capable of characterizing the imaging properties of a soft X-ray optical system at its operational wavelength and its operational configuration. This optical test enables direct visual inspection of imaging aberrations and provides real-time feedback for the alignment of high resolution soft X-ray optical systems. A first application of this optical test was carried out on a Mo-Si multilayer-coated Schwarzschild objective as part of the MAXIMUM project. Results from the alignment procedure are presented as well as the possibility for testing in the hard X-ray regime.
36 CFR § 1194.21 - Software applications and operating systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...
36 CFR 1194.21 - Software applications and operating systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...
36 CFR 1194.21 - Software applications and operating systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...
36 CFR 1194.21 - Software applications and operating systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...
36 CFR 1194.21 - Software applications and operating systems.
Code of Federal Regulations, 2011 CFR
2011-07-01
... an image represents a program element, the information conveyed by the image must also be available in text. (e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application's...
Image processing of metal surface with structured light
NASA Astrophysics Data System (ADS)
Luo, Cong; Feng, Chang; Wang, Congzheng
2014-09-01
In a structured light vision measurement system, the ideal image of the structured light stripe contains, apart from a black background, only the gray-level information marking the position of the stripe. However, the actual image contains image noise, complex background content, and other features that do not belong to the stripe and that interfere with the useful information. To extract the stripe center on a metal surface accurately, a new processing method is presented. Through adaptive median filtering, the noise is preliminarily removed, and the noise introduced by the CCD camera and the measurement environment is further removed with a difference image method. To highlight fine details and enhance the blurred regions between the stripe and the noise, a sharpening algorithm is used which combines the best features of the Laplacian operator and the Sobel operator. Morphological opening and closing operations are used to compensate for the loss of information. Experimental results show that this method is effective in image processing, not only suppressing the interference but also heightening contrast. It is beneficial for subsequent processing.
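The pipeline described above (median filtering, background removal by differencing, Laplacian-plus-Sobel sharpening, and morphological opening/closing) can be sketched with OpenCV as follows. This is an illustrative approximation: a fixed-size median filter stands in for the adaptive one, and the kernel sizes, weights, and the assumption of a reference image without the stripe are all choices made here, not taken from the paper.

```python
import cv2
import numpy as np

def preprocess_stripe(img, background, median_ksize=5, morph_ksize=3):
    """Pre-process an 8-bit grayscale structured-light stripe image of a metal surface.

    img:        image containing the laser stripe.
    background: reference image of the same scene without the stripe.
    """
    # 1. Median filtering to knock down impulse noise.
    den = cv2.medianBlur(img, median_ksize)
    ref = cv2.medianBlur(background, median_ksize)

    # 2. Difference image removes the static background and fixed-pattern noise.
    diff = cv2.absdiff(den, ref)

    # 3. Sharpen by combining Laplacian (fine detail) and Sobel (edge strength).
    lap = cv2.Laplacian(diff, cv2.CV_32F, ksize=3)
    gx = cv2.Sobel(diff, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(diff, cv2.CV_32F, 0, 1, ksize=3)
    sharp = diff.astype(np.float32) + 0.5 * np.abs(lap) + 0.5 * cv2.magnitude(gx, gy)
    sharp = cv2.normalize(sharp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # 4. Opening then closing to remove residual specks and fill small gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (morph_ksize, morph_ksize))
    opened = cv2.morphologyEx(sharp, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```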
Real-time advanced spinal surgery via visible patient model and augmented reality system.
Wu, Jing-Ren; Wang, Min-Liang; Liu, Kai-Che; Hu, Ming-Hsien; Lee, Pei-Yuan
2014-03-01
This paper presents an advanced augmented reality system for spinal surgery assistance, and develops entry-point guidance prior to vertebroplasty spinal surgery. Based on image-based marker detection and tracking, the proposed camera-projector system superimposes pre-operative 3-D images onto patients. The patients' preoperative 3-D image model is registered by projecting it onto the patient such that the synthetic 3-D model merges with the real patient image, enabling the surgeon to see through the patients' anatomy. The proposed method is much simpler than heavy and computationally challenging navigation systems, and also reduces radiation exposure. The system is experimentally tested on a preoperative 3D model, dummy patient model and animal cadaver model. The feasibility and accuracy of the proposed system is verified on three patients undergoing spinal surgery in the operating theater. The results of these clinical trials are extremely promising, with surgeons reporting favorably on the reduced time of finding a suitable entry point and reduced radiation dose to patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
JANUS: A Compilation System for Balancing Parallelism and Performance in OpenVX
NASA Astrophysics Data System (ADS)
Omidian, Hossein; Lemieux, Guy G. F.
2018-04-01
Embedded systems typically do not have enough on-chip memory for an entire image buffer. Programming systems like OpenCV operate on entire image frames at each step, making them use excessive memory bandwidth and power. In contrast, the paradigm used by OpenVX is much more efficient; it uses image tiling, and the compilation system is allowed to analyze and optimize the operation sequence, specified as a compute graph, before doing any pixel processing. In this work, we are building a compilation system for OpenVX that can analyze and optimize the compute graph to take advantage of parallel resources in many-core systems or FPGAs. Using a database of prewritten OpenVX kernels, it automatically adjusts the image tile size, as well as using kernel duplication and coalescing, to meet a defined area (resource) target or a specified throughput target. This allows a single compute graph to target implementations with a wide range of performance needs or capabilities, e.g. from handheld to datacenter, that use minimal resources and power to reach the performance target.
Nanoimaging using soft X-ray and EUV laser-plasma sources
NASA Astrophysics Data System (ADS)
Wachulak, Przemyslaw; Torrisi, Alfio; Ayele, Mesfin; Bartnik, Andrzej; Czwartos, Joanna; Węgrzyński, Łukasz; Fok, Tomasz; Fiedorowicz, Henryk
2018-01-01
In this work we present three experimental, compact desk-top imaging systems: SXR and EUV full field microscopes and the SXR contact microscope. The systems are based on laser-plasma EUV and SXR sources employing a double stream gas puff target. The EUV and SXR full field microscopes, operating at 13.8 nm and 2.88 nm wavelengths, are capable of imaging nanostructures with a sub-50 nm spatial resolution and short (seconds) exposure times. The SXR contact microscope operates in the "water-window" spectral range and produces an imprint of the internal structure of the imaged sample in a thin layer of SXR sensitive photoresist. Applications of such desk-top EUV and SXR microscopes, mostly for biological samples (CT26 fibroblast cells and keratinocytes), are also presented. Details about the sources, the microscopes, as well as the imaging results for various objects will be presented and discussed. The development of such compact imaging systems may be important for new research related to biological, materials science, and nanotechnology applications.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
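The processing described above (isolating the laser spot by differencing against a pre-illumination image in each camera, then converting the horizontal disparity to range) can be sketched as follows. This is an illustration under assumptions: rectified, parallel cameras, a simple intensity threshold, and the standard stereo relation Z = f·B/d; the focal length (in pixels) and baseline are example parameters, not values from the patent.

```python
import numpy as np

def spot_centroid(before, after, thresh=30):
    """Isolate the laser spot by frame differencing and return its (x, y) centroid."""
    diff = after.astype(np.int32) - before.astype(np.int32)
    ys, xs = np.nonzero(diff > thresh)
    if xs.size == 0:
        return None                       # no spot found above threshold
    return xs.mean(), ys.mean()

def stereo_range(left_pair, right_pair, focal_px, baseline_m):
    """Range from the horizontal disparity of the laser spot in the two cameras."""
    xl, _ = spot_centroid(*left_pair)     # (before, after) images from the left camera
    xr, _ = spot_centroid(*right_pair)    # (before, after) images from the right camera
    disparity = xl - xr                   # assumes rectified, parallel cameras
    return focal_px * baseline_m / disparity
```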
An enhanced narrow-band imaging method for the microvessel detection
NASA Astrophysics Data System (ADS)
Yu, Feng; Song, Enmin; Liu, Hong; Wan, Youming; Zhu, Jun; Hung, Chih-Cheng
2018-02-01
A medical endoscope system combined with narrow-band imaging (NBI) has been shown to be a superior diagnostic tool for early cancer detection. NBI can reveal the morphologic changes of microvessels in superficial cancer. In order to improve the conspicuousness of microvessel texture, we propose an enhanced NBI method for endoscopic images. To obtain more conspicuous narrow-band images, we use an edge operator to extract the edge information of the narrow-band blue and green images, and give a weight to the extracted edges. Then, the weighted edges are fused with the narrow-band blue and green images. Finally, the displayed endoscopic images are reconstructed with the enhanced narrow-band images. In addition, we evaluate the performance of the enhanced narrow-band images with different edge operators. Experimental results indicate that the Sobel and Canny operators achieve the best performance of all. Compared with the traditional NBI method of the Olympus company, our proposed method produces a more conspicuous microvessel texture.
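The enhancement step can be sketched as weighted edge fusion per channel followed by reconstruction of the display image. The code below is illustrative only: the Sobel operator is one of the operators the paper evaluates, but the weight value and the channel-to-display mapping are assumptions made here.

```python
import cv2
import numpy as np

def enhance_nbi(blue_nb, green_nb, weight=0.4):
    """Edge-enhance 8-bit narrow-band blue/green images and rebuild a display image."""
    def edge_boost(ch):
        gx = cv2.Sobel(ch, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(ch, cv2.CV_32F, 0, 1, ksize=3)
        fused = ch.astype(np.float32) + weight * cv2.magnitude(gx, gy)
        return np.clip(fused, 0, 255).astype(np.uint8)

    b, g = edge_boost(blue_nb), edge_boost(green_nb)
    # Assumed display mapping: blue narrow-band -> B and G channels, green -> R (BGR order).
    return cv2.merge([b, b, g])
```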
Real-time intraoperative fluorescence imaging system using light-absorption correction.
Themelis, George; Yoo, Jung Sun; Soh, Kwang-Sup; Schulz, Ralf; Ntziachristos, Vasilis
2009-01-01
We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that improves the accuracy of epi-illumination fluorescence images with respect to light intensity variation in tissues. The implementation is based on the use of three cameras operating in parallel, utilizing a common lens, which allows for the concurrent collection of color, fluorescence, and light attenuation images at the excitation wavelength from the same field of view. The correction is based on a ratio approach of fluorescence over light attenuation images. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance metrics of this system on phantoms and animals, and discuss the advantages over conventional epi-illumination systems developed for real-time applications and the limits of validity of corrected epi-illumination fluorescence imaging.
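The ratio-based correction can be summarized in a few lines; this sketch assumes co-registered float images and adds a small constant to avoid division by zero, both of which are assumptions made here for illustration.

    import numpy as np

    def corrected_fluorescence(fluo, attenuation, eps=1e-6):
        """Ratio-based correction: divide the fluorescence image by the
        co-registered light-attenuation image acquired at the excitation
        wavelength (both float arrays from the same field of view)."""
        corrected = fluo / (attenuation + eps)   # eps avoids division by zero
        return corrected / corrected.max()       # normalise for display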
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this assessment. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discreetly. For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
NASA Technical Reports Server (NTRS)
Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)
1998-01-01
A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses an 800 nm long-pass filter, which during overcast conditions blocks sufficient background light so that the hydrogen flame is brighter than the background; the second CCD camera uses a 1100 nm long-pass filter, which blocks the solar background in full-sunshine conditions so that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signals from the cameras into visible images. The operator can select the appropriate filtered camera depending on the current light conditions. In addition, a narrow band-pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if it detects a flame, providing additional flame detection so that the operator does not overlook a small flame.
First demonstration of a vehicle mounted 250GHz real time passive imager
NASA Astrophysics Data System (ADS)
Mann, Chris
2009-05-01
This paper describes the design and performance of a ruggedized passive terahertz imager; the frequency of operation is a 40 GHz band centred on 250 GHz. This system has been specifically targeted at vehicle-mounted operation, outdoors in extreme environments. The unit incorporates temperature stabilization along with an anti-vibration chassis and is sealed to allow it to be used in a dusty environment. Within the system, a 250 GHz heterodyne detector array is mated with optics and a scanner to allow real-time imaging out to 100 meters. First applications are envisaged to be stand-off, person-borne IED detection at up to 30 meters, but the unique properties of this frequency band present other potential uses such as seeing through smoke and fog. The possibility of use as a landing aid is discussed. A detailed description of the system design and video examples of typical imaging output will be presented.
NASA Astrophysics Data System (ADS)
Jakubovic, Raphael; Gupta, Shuarya; Guha, Daipayan; Mainprize, Todd; Yang, Victor X. D.
2017-02-01
Cranial neurosurgical procedures are especially delicate considering that the surgeon must localize the subsurface anatomy with limited exposure and without the ability to see beyond the surface of the surgical field. Surgical accuracy is imperative, as even minor surgical errors can cause major neurological deficits. Traditionally, surgical precision was highly dependent on surgical skill. However, the introduction of intraoperative surgical navigation has shifted the paradigm and become the current standard of care for cranial neurosurgery. Intra-operative image-guided navigation systems are currently used to allow the surgeon to visualize the three-dimensional subsurface anatomy using pre-acquired computed tomography (CT) or magnetic resonance (MR) images. The patient anatomy is fused to the pre-acquired images using various registration techniques, and surgical tools are typically localized using optical tracking methods. Although these techniques positively impact complication rates, surgical accuracy is limited by the accuracy of the navigation system, and as such quantification of surgical error is required. While many different measures of registration accuracy have been presented, true navigation accuracy can only be quantified post-operatively by comparing a ground-truth landmark to the intra-operative visualization. In this study we quantified the accuracy of cranial neurosurgical procedures using a novel optical surface imaging navigation system to visualize the three-dimensional surface anatomy. A tracked probe was placed on the screws of cranial fixation plates during surgery, and the reported position of the centre of each screw was compared to its co-ordinates in the post-operative CT or MR images, thus quantifying cranial neurosurgical error.
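The per-screw error described above reduces to a Euclidean distance once the intra-operative and post-operative coordinates are expressed in the same frame; the sketch below assumes that transformation has already been applied, and the function and argument names are illustrative.

    import numpy as np

    def navigation_errors(tracked_xyz, postop_xyz):
        """Per-landmark navigation error.

        tracked_xyz : (N, 3) probe positions reported intra-operatively
        postop_xyz  : (N, 3) ground-truth screw centres from post-op CT/MR,
                      assumed already transformed into the same frame.
        """
        tracked = np.asarray(tracked_xyz, dtype=float)
        truth = np.asarray(postop_xyz, dtype=float)
        errors = np.linalg.norm(tracked - truth, axis=1)
        return errors, errors.mean(), errors.std()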
Information fusion for diabetic retinopathy CAD in digital color fundus photographs.
Niemeijer, Meindert; Abramoff, Michael D; van Ginneken, Bram
2009-05-01
The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, use of CAD for automatic reading or prescreening may become feasible. This work addresses the question of how to fuse information from multiple CAD algorithms, operating on the multiple images that comprise an exam, to determine a likelihood that the exam is normal and would not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several different fusion methods are proposed and their effect on the performance of a complete, comprehensive automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15,000 exams (60,000 images). The best performing fusion method obtained an area under the receiver operating characteristic curve of 0.881. This indicates that automated prescreening could be applied in diabetic retinopathy screening programs.
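One simple exam-level fusion rule, shown only as an illustration of the problem setting (the paper evaluates several fusion methods and does not necessarily use this one): take the most suspicious lesion probability per image, the most suspicious image per detector, and average across detectors.

    import numpy as np

    def exam_abnormality_score(detector_outputs):
        """Fuse per-image CAD outputs into one exam-level abnormality score.

        detector_outputs : dict mapping detector name -> list (one entry per
        image in the exam) of lesion probabilities found in that image.
        The max/mean rule below is just one of many possible fusion schemes.
        """
        per_detector = []
        for probs_per_image in detector_outputs.values():
            image_scores = [max(p) if len(p) else 0.0 for p in probs_per_image]
            per_detector.append(max(image_scores))  # most suspicious image
        return float(np.mean(per_detector))         # average across detectors

    # Example: two hypothetical detectors on a two-image exam.
    score = exam_abnormality_score({
        "red_lesions": [[0.10, 0.05], [0.02]],
        "exudates": [[0.30], []],
    })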
Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies
López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P.; Bonastre, Alberto; Serrano, Juan J.
2012-01-01
Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to obtain the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMAX, 3G/4G, etc.). PMID:23202232
Monitoring pest insect traps by means of low-power image sensor technologies.
López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P; Bonastre, Alberto; Serrano, Juan J
2012-11-13
Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to obtain the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMAX, 3G/4G, etc.).
Zhu, Ming; Chai, Gang; Lin, Li; Xin, Yu; Tan, Andy; Bogari, Melia; Zhang, Yan; Li, Qingfeng
2016-12-01
Augmented reality (AR) technology can superimpose a virtual image generated by a computer onto the real operating field to present an integrated image that enhances surgical safety. The purpose of our study is to develop a novel AR-based navigation system for craniofacial surgery. We focus on orbital hypertelorism correction, because the surgery requires high precision and is considered challenging even for senior craniofacial surgeons. Twelve patients with orbital hypertelorism were selected. The preoperative computed tomography data were imported into a 3-dimensional platform for preoperative design. The position and orientation of the virtual information and the real world were adjusted by an image registration process. The AR toolkits were used to realize the integrated image. Computed tomography was also performed after the operation to compare the preoperative plan with the actual operative outcome. Our AR-based navigation system was successfully used in these patients, directly displaying 3-dimensional navigational information onto the surgical field. All patients achieved an improved appearance under the guidance of the navigation image. The differences in interdacryon distance and in the position of the dacryon point on each side were not significant (P > 0.05) between the preoperative plan and the actual surgical outcome. This study reports an effective visualized approach for guiding orbital hypertelorism correction. Our AR-based navigation system may lay a foundation for craniofacial surgery navigation. AR technology could be considered a helpful tool for precise osteotomy in craniofacial surgery.
Robotically assisted ureteroscopy for kidney exploration
NASA Astrophysics Data System (ADS)
Talari, Hadi F.; Monfaredi, Reza; Wilson, Emmanuel; Blum, Emily; Bayne, Christopher; Peters, Craig; Zhang, Anlin; Cleary, Kevin
2017-03-01
Ureteroscopy is a minimally invasive procedure for diagnosis and treatment of urinary tract pathology. Ergonomic and visualization challenges as well as radiation exposure are limitations to conventional ureteroscopy. Therefore, we have developed a robotic system to "power drive" a flexible ureteroscope with 3D tip tracking and pre-operative image overlay. The proposed system was evaluated using a kidney phantom registered to pre-operative MR images. Initial experiments show the potential of the device to provide additional assistance, precision, and guidance during urology procedures.
Operational Analysis of Time-Optimal Maneuvering for Imaging Spacecraft
2013-03-01
The operational analysis of time-optimal maneuvering is performed for the Singapore-developed X-SAT imaging spacecraft. The analysis is facilitated through the use of AGI's Systems Tool Kit (STK) software and an Analytic Hierarchy Process (AHP)-based approach.
The Pan-STARRS PS1 Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Magnier, E.
The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and occasionally uses the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented from the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.
An arc control and protection system for the JET lower hybrid antenna based on an imaging system.
Figueiredo, J; Mailloux, J; Kirov, K; Kinna, D; Stamp, M; Devaux, S; Arnoux, G; Edwards, J S; Stephen, A V; McCullen, P; Hogben, C
2014-11-01
Arcs are potentially the most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled they can produce damage and cause plasma disruption by impurity influx. To address this issue, an arc real-time control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma. This is done through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real-time control system. It has adapted the ITER-like wall hotspot protection system, using an identical CCD camera and real-time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided into 24 Regions Of Interest (ROIs), each one corresponding to one klystron. If an arc precursor is detected in an ROI, power is reduced locally, avoiding potential damage and plasma disruption. The power is subsequently reinstated if, during a defined interval of time, image analysis confirms that arcing is not present. This system was successfully commissioned during the restart phase and beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real-time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.
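A schematic sketch of the per-ROI control cycle described above; the ROI geometry, threshold, normalized power levels, and hold interval are placeholders, not the JET system's actual parameters.

    import numpy as np

    def arc_protection_step(frame, rois, power, threshold, hold_frames, state):
        """One control cycle: check each klystron's ROI for an arc precursor.

        frame : filtered camera image (2-D array)
        rois  : list of 24 (row_slice, col_slice) tuples, one per klystron
        power : list of 24 requested power levels (modified in place)
        state : list of 24 counters of consecutive 'clean' frames
        """
        for k, (rs, cs) in enumerate(rois):
            if frame[rs, cs].max() > threshold:   # arc precursor detected
                power[k] = 0.0                    # reduce power locally
                state[k] = 0
            elif power[k] == 0.0:
                state[k] += 1
                if state[k] >= hold_frames:       # no arcing for the interval
                    power[k] = 1.0                # reinstate power
        return power, state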
NASA Technical Reports Server (NTRS)
Bracken, P. A.; Dalton, J. T.; Quann, J. J.; Billingsley, J. B.
1978-01-01
The Atmospheric and Oceanographic Information Processing System (AOIPS) was developed to help applications investigators perform required interactive image data analysis rapidly and to eliminate the inefficiencies and problems associated with batch operation. This paper describes the configuration and processing capabilities of AOIPS and presents unique subsystems for displaying, analyzing, storing, and manipulating digital image data. Applications of AOIPS to research investigations in meteorology and earth resources are featured.
Video System Highlights Hydrogen Fires
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.
1992-01-01
Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
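A sketch of the false-color composition described above, assuming co-registered, normalized frames; the additive channel mapping (hydrogen band to red, carbon band to blue, over a black-and-white base) follows the description, while the normalization and function name are illustrative.

    import numpy as np

    def flame_display(visible, mid_ir_hydrogen, carbon_band):
        """Overlay a colour-coded fire map on a black-and-white scene.

        visible         : grayscale video frame, float in [0, 1]
        mid_ir_hydrogen : band responding to hydrogen flames
        carbon_band     : band responding to carbon-based fires
        All inputs are co-registered arrays of the same shape.
        """
        rgb = np.dstack([visible, visible, visible]).astype(float)
        rgb[..., 0] = np.clip(rgb[..., 0] + mid_ir_hydrogen, 0, 1)  # red
        rgb[..., 2] = np.clip(rgb[..., 2] + carbon_band, 0, 1)      # blue
        return rgb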
Invited Article: Digital beam-forming imaging riometer systems
NASA Astrophysics Data System (ADS)
Honary, Farideh; Marple, Steve R.; Barratt, Keith; Chapman, Peter; Grill, Martin; Nielsen, Erling
2011-03-01
The design and operation of a new generation of digital imaging riometer systems developed by Lancaster University are presented. In the heart of the digital imaging riometer is a field-programmable gate array (FPGA), which is used for the digital signal processing and digital beam forming, completely replacing the analog Butler matrices which have been used in previous designs. The reconfigurable nature of the FPGA has been exploited to produce tools for remote system testing and diagnosis which have proven extremely useful for operation in remote locations such as the Arctic and Antarctic. Different FPGA programs enable different instrument configurations, including a 4 × 4 antenna filled array (producing 4 × 4 beams), an 8 × 8 antenna filled array (producing 7 × 7 beams), and a Mills cross system utilizing 63 antennas producing 556 usable beams. The concept of using a Mills cross antenna array for riometry has been successfully demonstrated for the first time. The digital beam forming has been validated by comparing the received signal power from cosmic radio sources with results predicted from the theoretical beam radiation pattern. The performances of four digital imaging riometer systems are compared against each other and a traditional imaging riometer utilizing analog Butler matrices. The comparison shows that digital imaging riometer systems, with independent receivers for each antenna, can obtain much better measurement precision for filled arrays or much higher spatial resolution for the Mills cross configuration when compared to existing imaging riometer systems.
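The digital beam forming performed in the FPGA can be illustrated with a narrow-band phase-steered (delay-and-sum) computation over a filled array; the array geometry, wavelength, and steering grid in this sketch are placeholders rather than the instrument's actual configuration.

    import numpy as np

    def form_beams(samples, positions, wavelength, az_angles, el_angles):
        """Narrow-band digital beam forming for a filled riometer array.

        samples   : (N_ant,) complex baseband samples, one per antenna
        positions : (N_ant, 2) antenna x/y positions in metres
        Returns a (len(el_angles), len(az_angles)) map of beam powers.
        """
        k = 2 * np.pi / wavelength
        beams = np.zeros((len(el_angles), len(az_angles)))
        for i, el in enumerate(el_angles):
            for j, az in enumerate(az_angles):
                # Look direction projected onto the array plane.
                ux = np.cos(el) * np.sin(az)
                uy = np.cos(el) * np.cos(az)
                phase = k * (positions[:, 0] * ux + positions[:, 1] * uy)
                beams[i, j] = np.abs(np.sum(samples * np.exp(-1j * phase)))**2
        return beams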
Automated design of image operators that detect interest points.
Trujillo, Leonardo; Olague, Gustavo
2008-01-01
This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.
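The sketch below is not one of the paper's evolved operators; it only illustrates how an interest measure can be composed from the kind of simple primitives used in the GP search (Gaussian smoothing, differencing, absolute value), followed by local-maximum selection.

    import numpy as np
    from scipy import ndimage

    def composite_interest_operator(img, sigma1=1.0, sigma2=2.0):
        """Toy operator composed of GP-style primitives: the absolute
        difference of two Gaussian smoothings (a band-pass response)."""
        return np.abs(ndimage.gaussian_filter(img, sigma1)
                      - ndimage.gaussian_filter(img, sigma2))

    def detect_points(img, n_points=100, window=5):
        """Keep the strongest local maxima of the interest measure."""
        response = composite_interest_operator(img.astype(float))
        local_max = (response == ndimage.maximum_filter(response, size=window))
        ys, xs = np.nonzero(local_max)
        order = np.argsort(response[ys, xs])[::-1][:n_points]
        return list(zip(ys[order], xs[order]))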
Overview of the land analysis system (LAS)
Quirk, Bruce K.; Olseson, Lyndon R.
1987-01-01
The Land Analysis System (LAS) is a fully integrated digital analysis system designed to support remote sensing, image processing, and geographic information systems research. LAS is being developed through a cooperative effort between the National Aeronautics and Space Administration Goddard Space Flight Center and the U.S. Geological Survey Earth Resources Observation Systems (EROS) Data Center. LAS has over 275 analysis modules capable of performing input and output, radiometric correction, geometric registration, signal processing, logical operations, data transformation, classification, spatial analysis, nominal filtering, conversion between raster and vector data types, and display manipulation of image and ancillary data. LAS is currently implemented using the Transportable Applications Executive (TAE). While TAE was designed primarily to be transportable, it still provides the necessary components for a standard user interface, terminal handling, input and output services, display management, and intersystem communications. With TAE the analyst uses the same interface to the processing modules regardless of the host computer or operating system. LAS was originally implemented at EROS on a Digital Equipment Corporation computer system under the Virtual Memory System (VMS) operating system with DeAnza displays and is presently being converted to run on a Gould Power Node and Sun workstations under the Berkeley Software Distribution UNIX operating system.
A post-processing system for automated rectification and registration of spaceborne SAR imagery
NASA Technical Reports Server (NTRS)
Curlander, John C.; Kwok, Ronald; Pang, Shirley S.
1987-01-01
An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.
Micijevic, Esad; Morfitt, Ron
2010-01-01
Systematic characterization and calibration of the Landsat sensors and the assessment of image data quality are performed using the Image Assessment System (IAS). The IAS was first introduced as an element of the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) ground segment and recently extended to the Landsat 4 (L4) and 5 (L5) Thematic Mappers (TM) and the Multispectral Scanners (MSS) on board the Landsat 1-5 satellites. In preparation for the Landsat Data Continuity Mission (LDCM), the IAS was developed for the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) with a capability to assess pushbroom sensors. This paper describes the LDCM version of the IAS and how it relates to the unique calibration and validation attributes of its on-board imaging sensors. The LDCM IAS will have to handle a significantly larger number of detectors, and the associated database, than previous IAS versions. An additional challenge is that the LDCM IAS must handle data from two sensors, as LDCM products will combine the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) spectral bands.
Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.
2016-01-01
Intracranial hemorrhage is a medical emergency that requires rapid detection and medication to keep any brain damage to a minimum. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage, which causes strong microwave scattering. The system uses a compact sensing antenna, which has ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data are processed to create a clear image of the brain using an improved back-projection algorithm, which is based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests are conducted on healthy subjects with different head sizes. The reconstructed images are statistically analyzed, and the absence of false positive results indicates the efficacy of the proposed system in future preclinical trials. PMID:26842761
NASA Astrophysics Data System (ADS)
Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.
2016-02-01
Intracranial hemorrhage is a medical emergency that requires rapid detection and medication to keep any brain damage to a minimum. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage, which causes strong microwave scattering. The system uses a compact sensing antenna, which has ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data are processed to create a clear image of the brain using an improved back-projection algorithm, which is based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests are conducted on healthy subjects with different head sizes. The reconstructed images are statistically analyzed, and the absence of false positive results indicates the efficacy of the proposed system in future preclinical trials.
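A simplified confocal (delay-and-sum) back projection, the family of algorithms the improved method belongs to; the single effective permittivity value, monostatic geometry, and image grid here stand in for the paper's head permittivity model and are assumptions for illustration.

    import numpy as np

    C0 = 299792458.0  # free-space speed of light, m/s

    def back_project(signals, t, antenna_xy, grid_x, grid_y, eps_eff):
        """Delay-and-sum back projection onto a 2-D image grid.

        signals    : (N_ant, N_t) time-domain scattered signals (monostatic)
        t          : (N_t,) common time axis in seconds
        antenna_xy : (N_ant, 2) antenna positions in metres
        eps_eff    : effective head permittivity (a single scalar here,
                     standing in for a spatial permittivity model)
        """
        v = C0 / np.sqrt(eps_eff)                 # propagation speed in tissue
        image = np.zeros((len(grid_y), len(grid_x)))
        for (ax, ay), s in zip(antenna_xy, signals):
            for iy, y in enumerate(grid_y):
                for ix, x in enumerate(grid_x):
                    delay = 2.0 * np.hypot(x - ax, y - ay) / v  # round trip
                    image[iy, ix] += np.interp(delay, t, s)
        return np.abs(image)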
Intelligence, mapping, and geospatial exploitation system (IMAGES)
NASA Astrophysics Data System (ADS)
Moellman, Dennis E.; Cain, Joel M.
1998-08-01
This paper provides further detail to one facet of the battlespace visualization concept described in last year's paper Battlespace Situation Awareness for Force XXI. It focuses on the National Imagery and Mapping Agency (NIMA) goal to 'provide customers seamless access to tailorable imagery, imagery intelligence, and geospatial information.' This paper describes Intelligence, Mapping, and Geospatial Exploitation System (IMAGES), an exploitation element capable of CONUS baseplant operations or field deployment to provide NIMA geospatial information collaboratively into a reconnaissance, surveillance, and target acquisition (RSTA) environment through the United States Imagery and Geospatial Information System (USIGS). In a baseplant CONUS setting IMAGES could be used to produce foundation data to support mission planning. In the field it could be directly associated with a tactical sensor receiver or ground station (e.g. UAV or UGV) to provide near real-time and mission specific RSTA to support mission execution. This paper provides IMAGES functional level design; describes the technologies, their interactions and interdependencies; and presents a notional operational scenario to illustrate the system flexibility. Using as a system backbone an intelligent software agent technology, called Open Agent Architecture™ (OAA™), IMAGES combines multimodal data entry, natural language understanding, and perceptual and evidential reasoning for system management. Configured to be DII COE compliant, it would utilize, to the extent possible, COTS applications software for data management, processing, fusion, exploitation, and reporting. It would also be modular, scaleable, and reconfigurable. This paper describes how the OAA™ achieves data synchronization and enables the necessary level of information to be rapidly available to various command echelons for making informed decisions. The reasoning component will provide for the best information to be developed in the timeline available and it will also provide statistical pedigree data. This pedigree data provides both uncertainties associated with the information and an audit trail cataloging the raw data sources and the processing/exploitation applied to derive the final product. Collaboration provides for a close union between the information producer(s)/exploiter(s) and the information user(s) as well as between local and remote producer(s)/exploiter(s). From a military operational perspective, IMAGES is a step toward further uniting NIMA with its customers and further blurring the dividing line between operational command and control (C2) and its supporting intelligence activities. IMAGES also provides a foundation for reachback to remote data sources, data stores, application software, and computational resources for achieving 'just-in-time' information delivery -- all of which is transparent to the analyst or operator employing the system.
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputing visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
Application of automatic image analysis in wood science
Charles W. McMillin
1982-01-01
In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...
NASA Astrophysics Data System (ADS)
Coffey, Stephen; Connell, Joseph
2005-06-01
This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.
Real-time photoacoustic imaging of prostate brachytherapy seeds using a clinical ultrasound system.
Kuo, Nathanael; Kang, Hyun Jae; Song, Danny Y; Kang, Jin U; Boctor, Emad M
2012-06-01
Prostate brachytherapy is a popular prostate cancer treatment option that involves the permanent implantation of radioactive seeds into the prostate. However, contemporary brachytherapy procedure is limited by the lack of an imaging system that can provide real-time seed-position feedback. While many other imaging systems have been proposed, photoacoustic imaging has emerged as a potential ideal modality to address this need, since it could easily be incorporated into the current ultrasound system used in the operating room. We present such a photoacoustic imaging system built around a clinical ultrasound system to achieve the task of visualizing and localizing seeds. We performed several experiments to analyze the effects of various parameters on the appearance of brachytherapy seeds in photoacoustic images. We also imaged multiple seeds in an ex vivo dog prostate phantom to demonstrate the possibility of using this system in a clinical setting. Although still in its infancy, these initial results of a photoacoustic imaging system for the application of prostate brachytherapy seed localization are highly promising.
[A computer-aided image diagnosis and study system].
Li, Zhangyong; Xie, Zhengxiang
2004-08-01
The revolution in information processing, particularly the digitizing of medicine, has changed medical study, work, and management. This paper reports a method for designing a system for computer-aided image diagnosis and study. Combining ideas from graph-text systems and the picture archiving and communication system (PACS), the system was realized and used for "prescription through computer", "managing images" and "reading images under computer and helping the diagnosis". Typical examples were also constructed in a database and used to teach beginners. The system was developed with visual development tools based on object-oriented programming (OOP) and was put into operation on the Windows 9X platform. The system possesses a friendly man-machine interface.
Virtual Ultrasound Guidance for Inexperienced Operators
NASA Technical Reports Server (NTRS)
Caine, Timothy; Martin, David
2012-01-01
Medical ultrasound or echocardiographic studies are highly operator-dependent and generally require lengthy training and internship to perfect. To obtain quality echocardiographic images in remote environments, such as on-orbit, remote guidance of studies has been employed. This technique involves minimal training for the user, coupled with remote guidance from an expert. When real-time communication or expert guidance is not available, a more autonomous system of guiding an inexperienced operator through an ultrasound study is needed. One example would be missions beyond low Earth orbit in which the time delay inherent with communication will make remote guidance impractical. The Virtual Ultrasound Guidance system is a combination of hardware and software. The hardware portion includes, but is not limited to, video glasses that allow hands-free, full-screen viewing. The glasses also allow the operator a substantial field of view below the glasses to view and operate the ultrasound system. The software is a comprehensive video program designed to guide an inexperienced operator through a detailed ultrasound or echocardiographic study without extensive training or guidance from the ground. The program contains a detailed description using video and audio to demonstrate equipment controls, ergonomics of scanning, study protocol, and scanning guidance, including recovery from sub-optimal images. The components used in the initial validation of the system include an Apple iPod Classic third-generation as the video source, and Myvue video glasses. Initially, the program prompts the operator to power-up the ultrasound and position the patient. The operator would put on the video glasses and attach them to the video source. After turning on both devices and the ultrasound system, the audio-video guidance would then instruct on patient positioning and scanning techniques. A detailed scanning protocol follows with descriptions and reference video of each view along with advice on technique. The program also instructs the operator regarding the types of images to store and how to overcome pitfalls in scanning. Images can be forwarded to the ground or other site when convenient. Following study completion, the video glasses, video source, and ultrasound system are powered down and stored. Virtually any equipment that can play back video can be used to play back the program. This includes a DVD player, personal computer, and some MP3 players.
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
Wei, Chen-Wei; Nguyen, Thu-Mai; Xia, Jinjun; Arnal, Bastien; Wong, Emily Y; Pelivanov, Ivan M; O'Donnell, Matthew
2015-02-01
Because of depth-dependent light attenuation, bulky, low-repetition-rate lasers are usually used in most photoacoustic (PA) systems to provide sufficient pulse energies to image at depth within the body. However, integrating these lasers with real-time clinical ultrasound (US) scanners has been problematic because of their size and cost. In this paper, an integrated PA/US (PAUS) imaging system is presented operating at frame rates >30 Hz. By employing a portable, low-cost, low-pulse-energy (~2 mJ/pulse), high-repetition-rate (~1 kHz), 1053-nm laser, and a rotating galvo-mirror system enabling rapid laser beam scanning over the imaging area, the approach is demonstrated for potential applications requiring a few centimeters of penetration. In particular, we demonstrate here real-time (30 Hz frame rate) imaging (by combining multiple single-shot sub-images covering the scan region) of an 18-gauge needle inserted into a piece of chicken breast with subsequent delivery of an absorptive agent at more than 1-cm depth to mimic PAUS guidance of an interventional procedure. A signal-to-noise ratio of more than 35 dB is obtained for the needle in an imaging area 2.8 × 2.8 cm (depth × lateral). Higher frame rate operation is envisioned with an optimized scanning scheme.
PI2GIS: processing image to geographical information systems, a learning tool for QGIS
NASA Astrophysics Data System (ADS)
Correia, R.; Teodoro, A.; Duarte, L.
2017-10-01
To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become common to use image processing plugins to add new capabilities/functionalities integrated in Geographical Information System (GIS) software. The aim of this work was to develop an open source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated in a GIS software package (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, it was realized that a set of operations that are very useful in teaching remote sensing and image processing classes was lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification, and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of an area in northern Portugal.
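The two vegetation indices named above follow standard formulas; in this sketch the band arrays and the EVI coefficients are the commonly used values (an assumption here), with Landsat 8 OLI bands B5, B4, and B2 supplying NIR, red, and blue reflectance.

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red + 1e-12)

    def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
        """Enhanced Vegetation Index with commonly used coefficients."""
        return G * (nir - red) / (nir + C1 * red - C2 * blue + L + 1e-12)

    # nir, red, blue would be float reflectance arrays read from the
    # Landsat 8 OLI bands (B5, B4, and B2 respectively).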
Stirling, Paul; Valsalan Mannambeth, Rejith; Soler, Agustin; Batta, Vineet; Malhotra, Rajeev Kumar; Kalairajah, Yegappan
2015-03-18
To summarise and compare currently available evidence regarding accuracy of pre-operative imaging, which is one of the key choices for surgeons contemplating patient-specific instrumentation (PSI) surgery. The MEDLINE and EMBASE medical literature databases were searched, from January 1990 to December 2013, to identify relevant studies. The data from several clinical studies was assimilated to allow appreciation and comparison of the accuracy of each modality. The overall accuracy of each modality was calculated as proportion of outliers > 3% in the coronal plane of both computerised tomography (CT) or magnetic resonance imaging (MRI). Seven clinical studies matched our inclusion criteria for comparison and were included in our study for statistical analysis. Three of these reported series using MRI and four with CT. Overall percentage of outliers > 3% in patients with CT-based PSI systems was 12.5% vs 16.9% for MRI-based systems. These results were not statistically significant. Although many studies have been undertaken to determine the ideal pre-operative imaging modality, conclusions remain speculative in the absence of long term data. Ultimately, information regarding accuracy of CT and MRI will be the main determining factor. Increased accuracy of pre-operative imaging could result in longer-term savings, and reduced accumulated dose of radiation by eliminating the need for post-operative imaging and revision surgery.
Stirling, Paul; Valsalan Mannambeth, Rejith; Soler, Agustin; Batta, Vineet; Malhotra, Rajeev Kumar; Kalairajah, Yegappan
2015-01-01
AIM: To summarise and compare currently available evidence regarding accuracy of pre-operative imaging, which is one of the key choices for surgeons contemplating patient-specific instrumentation (PSI) surgery. METHODS: The MEDLINE and EMBASE medical literature databases were searched, from January 1990 to December 2013, to identify relevant studies. The data from several clinical studies was assimilated to allow appreciation and comparison of the accuracy of each modality. The overall accuracy of each modality was calculated as proportion of outliers > 3% in the coronal plane of both computerised tomography (CT) or magnetic resonance imaging (MRI). RESULTS: Seven clinical studies matched our inclusion criteria for comparison and were included in our study for statistical analysis. Three of these reported series using MRI and four with CT. Overall percentage of outliers > 3% in patients with CT-based PSI systems was 12.5% vs 16.9% for MRI-based systems. These results were not statistically significant. CONCLUSION: Although many studies have been undertaken to determine the ideal pre-operative imaging modality, conclusions remain speculative in the absence of long term data. Ultimately, information regarding accuracy of CT and MRI will be the main determining factor. Increased accuracy of pre-operative imaging could result in longer-term savings, and reduced accumulated dose of radiation by eliminating the need for post-operative imaging and revision surgery. PMID:25793170
Ultrasound guidance system for prostate biopsy
NASA Astrophysics Data System (ADS)
Hummel, Johann; Kerschner, Reinhard; Kaar, Marcus; Birkfellner, Wolfgang; Figl, Michael
2017-03-01
We designed a guidance system for prostate biopsy based on PET/MR images and 3D ultrasound (US). With our proposed method, the common inter-modal MR-US (or CT-US in the case of PET/CT) registration can be replaced by an intra-modal 3D/3D-US/US registration and an optical tracking system (OTS). On the pre-operative site, a PET/MR calibration allows both hybrid modalities to be linked with an abdominal 3D-US. On the interventional site, another abdominal 3D US is taken to merge the pre-operative images with the real-time 3D-US via 3D/3D-US/US registration. Finally, the images of a tracked trans-rectal US probe can be displayed with the pre-operative images by overlay. For PET/MR image fusion we applied a point-to-point registration between PET and OTS and between MR and OTS, respectively. The 3D/3D-US/US registration was evaluated for images taken in supine and lateral patient positions. To enable table shifts between PET/MR and US image acquisition, a table calibration procedure is presented. We found fiducial registration errors of 0.9 mm and 2.8 mm with respect to the MR and PET calibrations, respectively. The target registration error between MR and 3D US amounted to 1.4 mm. The registration error for the 3D/3D-US/US registration was found to be 3.7 mm. Furthermore, we have shown that ultrasound is applicable in an MR environment.
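The point-to-point registrations mentioned above are commonly solved with a least-squares rigid fit (the Arun/Kabsch SVD method); the sketch below is a generic implementation of that standard technique, not the authors' specific code, and it returns the same kind of fiducial registration error quoted in the abstract.

    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (R, t) mapping src points onto dst.

        src, dst : (N, 3) corresponding fiducial coordinates in the two
        coordinate frames (e.g., optical tracker and MR image space).
        """
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                  # proper rotation (det = +1)
        t = dc - R @ sc
        fre = np.linalg.norm((R @ src.T).T + t - dst, axis=1).mean()
        return R, t, fre                    # fiducial registration error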
Kirkwood, Melissa L; Guild, Jeffrey B; Arbique, Gary M; Tsai, Shirling; Modrall, J Gregory; Anderson, Jon A; Rectenwald, John; Timaran, Carlos
2016-11-01
A new proprietary image-processing system known as AlluraClarity, developed by Philips Healthcare (Best, The Netherlands) for radiation-based interventional procedures, claims to lower radiation dose while preserving image quality using noise-reduction algorithms. This study determined whether the surgeon and patient radiation dose during complex endovascular procedures (CEPs) is decreased after the implementation of this new operating system. Radiation dose to operators, procedure type, reference air kerma, kerma area product, and patient body mass index were recorded during CEPs on two Philips Allura FD 20 fluoroscopy systems with and without Clarity. Operator dose during CEPs was measured using optically stimulated luminescent nanoDot (Landauer Inc, Glenwood, Ill) detectors placed outside the lead apron at the left upper chest position. nanoDots were read using a microStar ii (Landauer Inc) medical dosimetry system. For the CEPs in the Clarity group, the radiation dose to surgeons was also measured by the DoseAware (Philips Healthcare) personal dosimetry system. Side-by-side measurements of DoseAware and nanoDots allowed for cross-calibration between systems. Operator effective dose was determined using a modified Niklason algorithm. To control for patient size and case complexity, the average fluoroscopy dose rate and the dose per radiographic frame were adjusted for body mass index differences and then compared between the groups with and without Clarity by procedure. Additional factors, for example, physician practice patterns, that may have affected operator dose were inferred by comparing the ratio of the operator dose to procedural kerma area product with and without Clarity. A one-sided Wilcoxon rank sum test was used to compare groups for radiation doses, reference air kermas, and operating practices for each procedure type. The analysis included 234 CEPs; 95 performed without Clarity and 139 with Clarity. Practice patterns of operators during procedures with and without Clarity were not significantly different. For all cases, procedure radiation dose to the patient and the primary and assistant operators was significantly decreased in the Clarity group, by 60% compared with the non-Clarity group. By procedure type, fluorography dose rates decreased by 44% for fenestrated endovascular repair and by up to 70% for lower extremity interventions. Fluoroscopy dose rates also significantly decreased, by about 37% to 47%, depending on procedure type. The AlluraClarity system reduces the patient and primary operator's radiation dose by more than half during CEPs. This feature appears to be an effective tool in lowering the radiation dose while maintaining image quality. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, the edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission, and display processes that impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. An edge detection algorithm is here considered to achieve high performance only if the information rate from the scene to the edge image approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with that of other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
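One way to obtain an empirical output spectrum for a non-linear, shift-variant operator is to average periodograms of its output over an ensemble of simulated scenes for fixed system parameters; the sketch below does exactly that using scikit-image's Canny implementation, and both that choice and the normalization are assumptions, not the paper's procedure.

    import numpy as np
    from skimage.feature import canny

    def empirical_output_psd(scenes, sigma=1.0):
        """Average periodogram of Canny edge maps over an ensemble of scenes.

        scenes : list of 2-D float arrays (simulated acquired images for one
        fixed set of visual-communication-system parameters).
        """
        psd, n = None, 0
        for scene in scenes:
            edges = canny(scene, sigma=sigma).astype(float)
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(edges)))**2 / edges.size
            psd = spectrum if psd is None else psd + spectrum
            n += 1
        return psd / n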
Implementation of remote monitoring and managing switches
NASA Astrophysics Data System (ADS)
Leng, Junmin; Fu, Guo
2010-12-01
To strengthen the safety of the network and provide greater convenience and efficiency for operators and managers, a system for remote monitoring and management of switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and can support a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and management system is designed and implemented according to the current infrastructure of the network and switches. The control and administration software is designed accordingly. The dynamic web page Java Server Pages (JSP) development platform is used in the system. An SQL (Structured Query Language) Server database is used to store and access image information, network messages, and user data. The reliability and security of the system are further strengthened by access control. The software in the system is cross-platform, so multiple operating systems (UNIX, Linux, and Windows) are supported. Application of the system can greatly reduce manpower costs and allows problems to be found and solved quickly.
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can combine the precompiled soft-operators in a high-level processing chain and can also code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
A remote camera operation system using a marker attached cap
NASA Astrophysics Data System (ADS)
Kawai, Hironori; Hama, Hiromitsu
2005-12-01
In this paper, we propose a convenient system to control a remote camera according to the eye-gazing direction of the operator, which is approximated by calculating the face direction by means of image processing. The operator puts a marker-attached cap on his head, and the system takes an image of the operator from above with only one video camera. Three markers are set up on the cap; three is the minimum number needed to calculate the tilt angle of the head. The more markers are used, the more robust the system becomes to occlusion, and the wider the range of head motion that is tolerated. The markers must not lie on any single three-dimensional straight line. To compensate for changes in the markers' color due to illumination conditions, the threshold for marker extraction is decided adaptively using a k-means clustering method. The system was implemented with MATLAB on a personal computer, and real-time operation was achieved. The experimental results confirmed the robustness of the system, and the tilt and pan angles of the head could be calculated with enough accuracy for practical use.
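The abstract describes the adaptive threshold only at a high level; the sketch below is one plausible reading, assuming a two-cluster k-means on a single colour channel whose cluster centres give the marker/background threshold. The function names and defaults are hypothetical.

    # Minimal sketch: adaptive marker-extraction threshold via 1-D, 2-cluster
    # k-means on pixel intensities (assumes bright markers on a darker cap).
    import numpy as np

    def kmeans_threshold(channel, n_iter=20):
        """Return an adaptive threshold separating marker from background pixels."""
        vals = channel.ravel().astype(float)
        c_lo, c_hi = vals.min(), vals.max()          # initial cluster centres
        for _ in range(n_iter):
            mid = 0.5 * (c_lo + c_hi)
            lo, hi = vals[vals < mid], vals[vals >= mid]
            if lo.size and hi.size:
                c_lo, c_hi = lo.mean(), hi.mean()
        return 0.5 * (c_lo + c_hi)

    def extract_markers(channel):
        return channel >= kmeans_threshold(channel)  # boolean marker mask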
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
Image recording requirements for earth observation applications in the next decade
NASA Technical Reports Server (NTRS)
Peavey, B.; Sos, J. Y.
1975-01-01
Future requirements for satellite-borne image recording systems are examined from the standpoints of system performance, system operation, product type, and product quality. Emphasis is on total system design while keeping in mind that the image recorder or scanner is the most crucial element which will affect the end product quality more than any other element within the system. Consideration of total system design and implementation for sustained operational usage must encompass the requirements for flexibility of input data and recording speed, pixel density, aspect ratio, and format size. To produce this type of system requires solution of challenging problems in interfacing the data source with the recorder, maintaining synchronization between the data source and the recorder, and maintaining a consistent level of quality. Film products of better quality than is currently achieved in a routine manner are needed. A 0.1 pixel geometric accuracy and 0.0001 d.u. radiometric accuracy on standard (240 mm) size format should be accepted as a goal to be reached in the near future.
Multi-beam range imager for autonomous operations
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.
1993-01-01
For space operations from Space Station Freedom, a real-time range imager will be very valuable for refueling and docking as well as space exploration operations. For these applications, as well as many other robotics and remote ranging applications, a small, portable, power-efficient, robust range imager capable of ranging to a few tens of km with 10 cm accuracy is needed. The system developed is based on a well known pseudo-random modulation technique applied to a laser transmitter, combined with a novel range resolution enhancement technique. In this technique, the transmitter is modulated at a relatively low frequency, on the order of a few MHz, to enhance the signal-to-noise ratio and to ease the stringent systems engineering requirements while still achieving very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof-of-principle experiment.
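As a rough illustration of the pseudo-random modulation principle (not the flight hardware or the resolution-enhancement technique), the sketch below correlates a transmitted PN chip sequence with a noisy, delayed echo; the correlation peak gives the round-trip delay. This basic correlator is limited to one-chip resolution, which is precisely why a clock-based enhancement is needed for centimetre accuracy; all numbers are illustrative.

    # Minimal sketch: PN ranging by cross-correlating the transmitted chip
    # sequence with the received echo.
    import numpy as np

    rng = np.random.default_rng(0)
    chip_rate = 4e6                    # 4 MHz modulation (order of a few MHz)
    n_chips = 4096
    code = rng.choice([-1.0, 1.0], size=n_chips)    # stand-in for an m-sequence

    true_delay_chips = 537                          # unknown round-trip delay
    echo = np.roll(code, true_delay_chips) * 0.01   # weak return
    echo += 0.5 * rng.standard_normal(n_chips)      # receiver noise

    # Circular cross-correlation via FFT; the peak locates the round-trip delay.
    corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
    delay_chips = int(np.argmax(corr))
    c = 3e8
    range_m = 0.5 * c * delay_chips / chip_rate     # one-way range estimate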
An augmented magnetic navigation system for Transcatheter Aortic Valve Implantation.
Luo, Zhe; Cai, Junfeng; Nie, Yuanyuan; Wang, Guotai; Gu, Lixu
2013-01-01
This research proposes an augmented magnetic navigation system for Transcatheter Aortic Valve Implantation (TAVI) employing a magnetic tracking system (MTS) combined with a dynamic aortic model and intra-operative ultrasound (US) images. The dynamic 3D aortic model is constructed from preoperative 4D computed tomography (CT) and is animated according to the real-time electrocardiograph (ECG) input of the patient. Preoperative planning is performed to determine the target position of the aortic valve prosthesis. Temporal alignment is performed to synchronize the ECG signals, intra-operative US images, and tracking information. With the assistance of the synchronized ECG signals, the contour of the aortic root, automatically extracted from short-axis US images, is registered to the dynamic aortic model intra-operatively by a feature-based registration. The augmented MTS then guides the interventionist to confidently position and deploy the aortic valve prosthesis at the target. The system was validated by animal studies on three porcine subjects, for which the deployment and tilting errors were 3.17 ± 0.91 mm and 7.40 ± 2.89°, respectively.
Trajectory-based morphological operators: a model for efficient image processing.
Jimeno-Morenilla, Antonio; Pujol, Francisco A; Molina-Carmona, Rafael; Sánchez-Romero, José L; Pujol, Mar
2014-01-01
Mathematical morphology has been an area of intensive research over the last few years. Although many remarkable advances have been achieved, there is still great interest in accelerating morphological operations so that they can be implemented in real-time systems. In this work, we present a new model for computing mathematical morphology operations, the so-called morphological trajectory model (MTM), in which a morphological filter is divided into a sequence of basic operations. A trajectory-based morphological operation (such as dilation or erosion) is then defined as the set of points resulting from the ordered application of these basic operations. The MTM approach allows working with different structuring elements, such as disks, and the experiments show that our method is independent of the structuring element size and can be easily applied to industrial systems and high-resolution images.
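For readers unfamiliar with the baseline operations being accelerated, the following is a conventional structuring-element implementation of dilation, erosion, and opening with a disk. It is not the MTM decomposition itself, only the reference behaviour that such a model reproduces.

    # Conventional baseline (not the MTM): binary morphology with a disk.
    import numpy as np
    from scipy import ndimage

    def disk(radius):
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        return (x**2 + y**2) <= radius**2

    img = np.zeros((64, 64), dtype=bool)
    img[30:34, 20:45] = True                  # a thin horizontal bar

    se = disk(5)
    dilated = ndimage.binary_dilation(img, structure=se)
    eroded = ndimage.binary_erosion(img, structure=se)
    opened = ndimage.binary_opening(img, structure=se)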
Use of virtual slide system for quick frozen intra-operative telepathology diagnosis in Kyoto, Japan
Tsuchihashi, Yasunari; Takamatsu, Terumasa; Hashimoto, Yukimasa; Takashima, Tooru; Nakano, Kooji; Fujita, Setsuya
2008-01-01
We started to use virtual slide (VS) and virtual microscopy (VM) systems for quick frozen intra-operative telepathology diagnosis in Kyoto, Japan. In the system we used a digital slide scanner, VASSALO by CLARO Inc., and a broadband optical fibre provided by NTT West Japan Inc. with a best-effort capacity of 100 Mbps. The client is the pathology laboratory of Yamashiro Public Hospital, one of the local centre hospitals located in the south of Kyoto Prefecture, where a full-time pathologist is not present. The client is connected by VPN to the telepathology centre of our institute located in central Kyoto. From the recent 15 test cases of VS telepathology diagnosis, including cases judging negative or positive surgical margins, we could assess the usefulness of VS in intra-operative remote diagnosis. The time required to prepare the frozen-section VS file was found to be around 10 min when a ×10 objective is used and the maximal dimension of the frozen sample is less than 20 mm. Correct focus of the VS images was attained in all cases and in all fields of each tissue specimen. So far, the best-effort broadband capacity appears to be sufficient to deliver a diagnosis in time during the operation. Telepathology diagnosis was achieved within 5 minutes in most cases using the VS viewer provided by CLARO Inc. The VS telepathology system was found to be superior to the conventional still-image telepathology system using a robotic microscope, since with the former we can observe much more image information in the limited time available intra-operatively, and in a much more efficient way. In the near future, VS telepathology will replace conventional still-image telepathology with a robotic microscope even in quick frozen intra-operative diagnosis. PMID:18673520
NASA Technical Reports Server (NTRS)
1982-01-01
A project to develop an effective mobility aid for blind pedestrians which acquires consecutive images of the scenes before a moving pedestrian, which locates and identifies the pedestrian's path and potential obstacles in the path, which presents path and obstacle information to the pedestrian, and which operates in real-time is discussed. The mobility aid has three principal components: an image acquisition system, an image interpretation system, and an information presentation system. The image acquisition system consists of a miniature, solid-state TV camera which transforms the scene before the blind pedestrian into an image which can be received by the image interpretation system. The image interpretation system is implemented on a microprocessor which has been programmed to execute real-time feature extraction and scene analysis algorithms for locating and identifying the pedestrian's path and potential obstacles. Identity and location information is presented to the pedestrian by means of tactile coding and machine-generated speech.
Spaceborne synthetic aperture radar signal processing using FPGAs
NASA Astrophysics Data System (ADS)
Sugimoto, Yohei; Ozawa, Satoru; Inaba, Noriyasu
2017-10-01
Synthetic Aperture Radar (SAR) imagery requires image reproduction through successive signal processing of the received data before images can be browsed and information extracted. The received signal data records of the ALOS-2/PALSAR-2 are stored in the onboard mission data storage and transmitted to the ground. In order to balance the storage usage against the capacity of the mission data communication networks, the operating duty of PALSAR-2 is limited, and this balance relies strongly on network availability. The observation operations of present spaceborne SAR systems are rigorously planned by simulating the mission data balance, given conflicting user demands. This problem should be solved so that we do not have to compromise the operations and the potential of next-generation spaceborne SAR systems. One solution is to compress the SAR data through onboard image reproduction and information extraction from the reproduced images. This is also beneficial for fast delivery of information products and event-driven observations by constellations. The Emergence Studio (Sōhatsu kōbō in Japanese), together with the Japan Aerospace Exploration Agency, is developing evaluation models of an FPGA-based signal processing system for onboard SAR image reproduction. The model, namely the "Fast L1 Processor (FLIP)" developed in 2016, can reproduce a 10 m-resolution single look complex image (Level 1.1) from ALOS/PALSAR raw signal data (Level 1.0). The FLIP running at 200 MHz is about twice as fast as CPU-based computing at 3.7 GHz, and the image processed by the FLIP is in no way inferior to the image processed with 32-bit computing in MATLAB.
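The FLIP internals are not described in the abstract. As background, the sketch below shows the standard first step of reproducing a Level 1.1 image from Level 1.0 raw data: range compression by frequency-domain matched filtering with a replica of the transmitted chirp. All radar parameters are illustrative, not PALSAR values.

    # Minimal sketch (not the FLIP implementation): range compression of one
    # raw SAR echo line by matched filtering with the transmitted LFM chirp.
    import numpy as np

    fs = 32e6                 # range sampling rate (illustrative)
    pulse_len = 20e-6         # chirp duration
    bandwidth = 14e6          # chirp bandwidth
    t = np.arange(0, pulse_len, 1 / fs)
    k = bandwidth / pulse_len
    chirp = np.exp(1j * np.pi * k * (t - pulse_len / 2) ** 2)   # baseband replica

    def range_compress(raw_line, replica):
        """Correlate a raw echo line with the chirp replica via FFT."""
        n = len(raw_line) + len(replica) - 1
        nfft = 1 << int(np.ceil(np.log2(n)))
        return np.fft.ifft(np.fft.fft(raw_line, nfft) * np.conj(np.fft.fft(replica, nfft)))[:n]

    # Synthetic test: one point target delayed by 5 microseconds
    raw = np.zeros(4096, dtype=complex)
    delay_samples = int(5e-6 * fs)
    raw[delay_samples:delay_samples + len(chirp)] += chirp
    compressed = np.abs(range_compress(raw, chirp))   # peak near delay_samples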
[The operating room of the future].
Broeders, I A; Niessen, W; van der Werken, C; van Vroonhoven, T J
2000-01-29
Advances in computer technology will revolutionize surgical techniques in the next decade. The operating room (OR) of the future will be connected with a laboratory where clinical specialists and researchers prepare image-guided interventions and explore the possibilities of these techniques. Virtual reality is linked to the actual situation in the OR with the aid of navigation instruments. During complicated operations, the images prepared preoperatively will be corrected during the operation on the basis of the information obtained peroperatively. MRI currently offers the greatest possibilities for image-guided surgery of soft tissues. Simpler techniques such as fluoroscopy and echography will become increasingly integrated into computer-assisted peroperative navigation. The development of medical robot systems will make microsurgical procedures by the endoscopic route possible. Tele-manipulation systems will also play a part in the training of surgeons. The design and construction of the OR will be adapted to the surgical technology and will include an information and control unit where preoperative and peroperative data come together and from which the surgeon operates the instruments. Concepts for the future OR should be adjusted regularly to allow for new surgical technology.
Development of fast neutron radiography system based on portable neutron generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Chia Jia, E-mail: gei-i-kani@hotmail.com; Nilsuwankosit, Sunchai, E-mail: sunchai.n@chula.ac.th
Due to the high installation cost, safety concerns, and the immobility of research reactors, a neutron radiography system based on a portable neutron generator is proposed. Since the neutrons generated from a portable neutron generator are mostly fast neutrons, the system emphasizes using fast neutrons for radiography. In order to suppress the influence of the X-rays produced by the neutron generator, a combination of a shielding material sandwiched between two identical imaging plates is used. A binary XOR operation is then applied to combine the information from the two imaging plates. The raw images obtained confirm that the X-rays indeed have a large effect and that the XOR operation helps enhance the contribution of the neutrons.
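The abstract does not spell out how the XOR is applied. One plausible reading, sketched below, is that both imaging-plate readouts are binarised and combined pixel-wise, so that structure common to the two plates (dominated by the X-ray background) cancels while differences between the front and back plates are kept. The threshold choice and function names are assumptions.

    # Minimal sketch (an assumed interpretation of the XOR step).
    import numpy as np

    def binarize(img, threshold=None):
        threshold = img.mean() if threshold is None else threshold
        return img >= threshold

    def xor_combine(front_plate, back_plate):
        return np.logical_xor(binarize(front_plate), binarize(back_plate))

    # front_plate, back_plate: 2-D arrays read out from the two imaging plates
    # combined = xor_combine(front_plate, back_plate)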
Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images
NASA Astrophysics Data System (ADS)
Li, Yan; Shekhar, Raj; Huang, David
2002-05-01
Optical coherence tomography (OCT) provides a non-contact and non-invasive means to visualize the corneal anatomy at micron-scale resolution. We obtained corneal images from an arc-scanning (converging) OCT system operating at a wavelength of 830 nm and a fan-shaped-scanning high-speed OCT system with an operating wavelength of 1310 nm. Different scan protocols (arc/fan) and data acquisition rates, as well as wavelength-dependent bio-tissue backscatter contrast and optical absorption, make the images acquired using the two systems different. We developed image-processing algorithms to automatically detect the air-tear interface, the epithelium-Bowman's layer interface, the laser in-situ keratomileusis (LASIK) flap interface, and the cornea-aqueous interface in both kinds of images. The overall segmentation scheme for the 830 nm and 1310 nm OCT images was similar, although different strategies were adopted for specific processing steps. Ultrasound pachymetry measurements of the corneal thickness and Placido-ring based corneal topography measurements of the corneal curvature were made on the same day as the OCT examination. Anterior/posterior corneal surface curvature measurement with OCT was also investigated. Results showed that automated segmentation of OCT images could evaluate the anatomic outcome of LASIK surgery.
Automatic retinal interest evaluation system (ARIES).
Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang
2014-01-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.
A novel ultrasound-guided shoulder arthroscopic surgery
NASA Astrophysics Data System (ADS)
Tyryshkin, K.; Mousavi, P.; Beek, M.; Chen, T.; Pichora, D.; Abolmaesumi, P.
2006-03-01
This paper presents a novel ultrasound-guided computer system for arthroscopic surgery of the shoulder joint. Intraoperatively, the system tracks and displays the surgical instruments, such as the arthroscope and arthroscopic burrs, relative to the anatomy of the patient. The purpose of this system is to improve the surgeon's perception of the three-dimensional space within the patient's anatomy in which the instruments are manipulated, and to provide guidance towards the targeted anatomy. Pre-operatively, computed tomography images of the patient are acquired to construct virtual three-dimensional surface models of the shoulder bone structure. Intra-operatively, live ultrasound images of pre-selected regions of the shoulder are captured using an ultrasound probe whose three-dimensional position is tracked by an optical camera. These images are used to register the surface model to the anatomy of the patient in the operating room. An initial alignment is obtained by matching at least three points manually selected on the model to their corresponding points identified on the ultrasound images. The registration is then refined with an iterative closest point or a sequential least squares estimation technique, and the registration results of these two techniques are compared in the present study. After registration, the surgical instruments are displayed relative to the surface model of the patient on a graphical screen visible to the surgeon. Results of laboratory experiments on a shoulder phantom indicate acceptable registration accuracy and sufficiently fast overall system performance for use in the operating room.
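The initial-alignment step from at least three manually selected point pairs can be written as a closed-form least-squares rigid transform; the sketch below (hypothetical names, ICP refinement omitted) illustrates it.

    # Minimal sketch: least-squares rigid transform (SVD / Kabsch method)
    # between model points and their corresponding ultrasound points.
    import numpy as np

    def rigid_transform(model_pts, us_pts):
        """Return rotation R and translation t mapping model_pts onto us_pts."""
        p, q = np.asarray(model_pts, float), np.asarray(us_pts, float)
        pc, qc = p.mean(axis=0), q.mean(axis=0)
        H = (p - pc).T @ (q - qc)                  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = qc - R @ pc
        return R, t

    # Example with three manually selected pairs (coordinates in mm, illustrative)
    model = [[0, 0, 0], [10, 0, 0], [0, 15, 5]]
    us = [[2, 1, 0], [12, 1, 0], [2, 16, 5]]
    R, t = rigid_transform(model, us)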
In-vessel visible inspection system on KSTAR
NASA Astrophysics Data System (ADS)
Chung, Jinil; Seo, D. C.
2008-08-01
To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korean superconducting tokamak advanced research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operating condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with images of the KSTAR Ohmic discharges during the first plasma campaign.
Image-based systems for space surveillance: from images to collision avoidance
NASA Astrophysics Data System (ADS)
Pyanet, Marine; Martin, Bernard; Fau, Nicolas; Vial, Sophie; Chalte, Chantal; Beraud, Pascal; Fuss, Philippe; Le Goff, Roland
2011-11-01
In many space systems, imaging is a core technology for fulfilling the mission requirements. Depending on the application, the needs and constraints differ, and imaging systems can offer a large variety of configurations in terms of wavelength, resolution, field of view, focal length, or sensitivity. Adequate image processing algorithms allow the extraction of the needed information and the interpretation of images. As a prime contractor for many major civil and military projects, Astrium ST is deeply involved in the proposal, development, and realization of new image-based techniques and systems for space-related purposes. Among the different applications, space surveillance is a major stake for the future of space transportation. Indeed, studies show that the number of debris objects in orbit is growing exponentially, and the existing population of small and medium debris is a concrete threat to operational satellites. This paper presents Astrium ST's activities regarding space surveillance for space situational awareness (SSA) and space traffic management (STM). Among other possible SSA architectures, the relevance of a ground-based optical station network is investigated. The objective is to detect and track space debris and to maintain an exhaustive and accurate catalogue in order to assess collision risk for satellites and space vehicles. The system is composed of different types of optical stations dedicated to specific functions (survey, passive tracking, active tracking), distributed around the globe. To support these investigations, two in-house operational breadboards were implemented and are operated for survey and tracking purposes. This paper focuses on Astrium ST's end-to-end optical-based survey concept. For the detection of new debris, a network of wide field-of-view survey stations is considered: these stations are able to detect small objects, and the associated image processing (detection and tracking) allows a preliminary restitution of their orbits.
Segmentation of financial seals and its implementation on a DSP-based system
NASA Astrophysics Data System (ADS)
He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao
2009-11-01
Automatic seal imprint identification is an important part of modern financial security, and accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor) based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement the image processing and to control and coordinate the work of each system module. The proposed algorithm consists of three stages: extraction of the grayscale seal image, denoising, and binarization. A grayscale seal image is extracted by a color transform from the financial instrument image. Adaptive morphological operations are used to highlight details of the extracted grayscale seal image and to smooth the background. After median filtering for noise elimination, the filtered seal image is binarized by Otsu's method. The algorithm was developed in the DSP development environment CCS with the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, white-balance calibration and coarse positioning of the seal imprint were implemented by having the TMS320DM642 control the image acquisition, and the IMGLIB library of the TMS320DM642 was used to improve efficiency. The experimental results showed that financial seal imprints, even those with intricate and dense strokes, can be correctly segmented by the proposed algorithm, and adhesion and incompleteness distortions in the segmentation results are reduced even when the original seal imprint is of poor quality.
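As a PC-side illustration of the three described stages (colour transform, median denoising, Otsu binarisation), not the DM642/IMGLIB implementation, the following sketch assumes a red seal imprint in a BGR colour image:

    # Minimal sketch: red-channel emphasis, median denoising, Otsu binarisation.
    import numpy as np
    from scipy import ndimage

    def otsu_threshold(gray):
        """Otsu's method on an 8-bit grayscale image."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        omega = np.cumsum(p)                        # class probabilities
        mu = np.cumsum(p * np.arange(256))          # cumulative means
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        return int(np.nanargmax(sigma_b))

    def segment_seal(bgr_image):
        """Extract a binary seal imprint from a colour instrument image."""
        # Red seals: emphasise red over the other channels (simple colour transform)
        b, g, r = [bgr_image[..., i].astype(float) for i in range(3)]
        gray = np.clip(r - 0.5 * (g + b), 0, 255).astype(np.uint8)
        gray = ndimage.median_filter(gray, size=3)  # impulse-noise suppression
        return gray >= otsu_threshold(gray)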
Chromotomosynthesis for high speed hyperspectral imagery
NASA Astrophysics Data System (ADS)
Bostick, Randall L.; Perram, Glen P.
2012-09-01
A rotating direct vision prism, chromotomosynthetic imaging (CTI) system operating in the visible creates hyperspectral imagery by collecting a set of 2D images, each spectrally projected at a different rotation angle of the prism. Mathematical reconstruction techniques that have been well tested in the field of medical physics are used to reconstruct the data and produce the 3D hyperspectral image. The instrument operates with a 100 mm focusing lens in the spectral range of 400-900 nm with a field of view of 71.6 mrad and an angular resolution of 0.8-1.6 μrad. The spectral resolution is 0.6 nm at the shortest wavelengths, degrading to over 10 nm at the longest wavelengths. Measurements using a point-like target show that performance is limited by chromatic aberration. The accuracy and utility of the instrument are assessed by comparing the CTI results to spatial data collected by a wide-band imager and hyperspectral data collected using a liquid crystal tunable filter (LCTF). The wide-band spatial content of the scene reconstructed from the CTI data is of the same or better quality as a single frame collected by the undispersed imaging system when projections are taken every 1°. Performance depends on the number of projections used, with projections every 5° producing adequate results in terms of target characterization. The data collected by the CTI system can provide spatial information of equal quality to a comparable imaging system, provide high-frame-rate slitless 1-D spectra, and generate 3-D hyperspectral imagery which can be exploited to provide the same results as a traditional multi-band spectral imaging system. While this prototype does not operate at high speeds, components exist which will allow CTI systems to generate hyperspectral video imagery at rates greater than 100 Hz. The instrument has considerable potential for characterizing bomb detonations, muzzle flashes, and other battlefield combustion events.
NASA Astrophysics Data System (ADS)
Brändström; Gustavsson, Björn; Pellinen-Wannberg, Asta; Sandahl, Ingrid; Sergienko, Tima; Steen, Ake
2005-08-01
The Auroral Large Imaging System (ALIS) was first proposed at the ESA-PAC meeting in Lahnstein in 1989. The first spectroscopic imaging station became operational in 1994, and since then up to six stations have been in simultaneous operation. Each station has a scientific-grade CCD detector and a six-position filter wheel for narrow-band interference filters. The field of view is around 70°. Each imager is mounted in a positioning system, enabling imaging of a common volume from several sites, which in turn enables triangulation and tomography. Raw data from ALIS are freely available at "http://alis.irf.se", and ALIS is open for scientific collaboration. ALIS made the first unambiguous observations of radio-induced optical emissions at high latitudes, and detected water in a Leonid meteor trail. Both rocket and satellite coordination are considered for future observations with ALIS.
HICO and RAIDS Experiment Payload - Hyperspectral Imager for the Coastal Ocean
NASA Technical Reports Server (NTRS)
Corson, Mike
2009-01-01
HICO and RAIDS Experiment Payload - Hyperspectral Imager For The Coastal Ocean (HREP-HICO) will operate a visible and near-infrared (VNIR) Maritime Hyperspectral Imaging (MHSI) system, to detect, identify and quantify coastal geophysical features from the International Space Station.
Remote consultation and diagnosis in medical imaging using a global PACS backbone network
NASA Astrophysics Data System (ADS)
Martinez, Ralph; Sutaria, Bijal N.; Kim, Jinman; Nam, Jiseung
1993-10-01
A Global PACS is a national network which interconnects several PACS networks at medical and hospital complexes using a national backbone network. A Global PACS environment enables new and beneficial operations between radiologists and physicians when they are located in different geographical locations. One operation allows the radiologist to view the same image folder at both the Local and Remote sites so that a diagnosis can be performed. This paper describes the user interface, database management, and network communication software which has been developed in the Computer Engineering Research Laboratory and the Radiology Research Laboratory. Specifically, a design for a file management system in a distributed environment is presented. In the remote consultation and diagnosis operation, a set of images is requested from the database archive system and sent to the Local and Remote workstation sites on the Global PACS network. Viewing the same images, the radiologists use pointing overlay commands, or frames, to point out features on the images. Each workstation transfers these frames to the other workstation, so that an interactive diagnosis session takes place. In this phase, we use fixed-size frames and variable-size frames to outline objects. The data packets for these frames traverse the national backbone in real time; this is accomplished by using TCP/IP protocol sockets for communication. The remote consultation and diagnosis operation has been tested in real time between the University Medical Center and the Bowman Gray School of Medicine at Wake Forest University over the Internet. In this paper, we show the feasibility of the operation in a Global PACS environment. Future improvements to the system will include real-time voice and interactive compressed video scenarios.
Performance updates of HAWK-I and preparation for the commissioning of the system GRAAL+HAWK-I
NASA Astrophysics Data System (ADS)
Hibon, Pascale; Paufique, Jerome; Kuntschner, Harald; Dobrzycka, Danuta; Le Louarn, Miska; Valenti, Elena; Neeser, Mark; Pompei, Emanuela; Arsenault, Robin; Siebenmorgen, Ralf; Madec, Pierre-Yves; Petr-Gotzens, Monika; La Fuente, Carlos; Urrutia, Josefina; Valenzuela, Javier; Castillo, Roberto; Baksai, Pedro; Garcia Dabo, Cesar Enrique; Jost, Andreas; Argomedo, Javier; Kolb, Johann; Kiekebusch, Mario; Hubin, Norbert; Duhoux, Philippe; Conzelmann, Ralf Dieter; Donaldson, Robert; Tordo, Sebastien; Huber, Stefan
2016-08-01
The High Acuity Wide field K-band Imager (HAWK-I) is a cryogenic wide-field imager operating in the wavelength range 0.9 to 2.5 microns. It has been in operation since 2007 on UT4 at the Very Large Telescope Observatory in seeing-limited mode. In 2017-2018, the GRound Layer Adaptive optics Assisted by Lasers (GRAAL) module will come into operation and the combined GRAAL+HAWK-I system will be commissioned. It will allow deeper exposures for nearly point-source objects, shorter exposure times for reaching the same magnitude, and/or a deeper detection limiting magnitude. With GRAAL, HAWK-I will operate more than 80% of the time with an equivalent K-band seeing of 0.55" (instead of 0.7" without GRAAL). GRAAL is already installed, and operations without adaptive optics were commissioned in 2015. We discuss here the latest performance updates of HAWK-I without Adaptive Optics (AO) and the preparation for the commissioning of the GRAAL+HAWK-I system.
Strickland, Matt; Tremaine, Jamie; Brigley, Greg; Law, Calvin
2013-06-01
As surgical procedures become increasingly dependent on equipment and imaging, the need for sterile members of the surgical team to have unimpeded access to the nonsterile technology in their operating room (OR) is of growing importance. To our knowledge, our team is the first to use an inexpensive infrared depth-sensing camera (a component of the Microsoft Kinect) and software developed in-house to give surgeons a touchless, gestural interface with which to navigate their picture archiving and communication systems intraoperatively. The system was designed and developed with feedback from surgeons and OR personnel, and with the principles of aseptic technique and gestural controls in mind. Simulation was used for basic validation before a trial in a pilot series of 6 hepatobiliary-pancreatic surgeries. The interface was used extensively in 2 laparoscopic and 4 open procedures. Surgeons primarily used the system for anatomic correlation, real-time comparison of intraoperative ultrasound with preoperative computed tomography and magnetic resonance imaging scans, and for teaching residents and fellows. The system worked well in a wide range of lighting conditions and procedures and led to a perceived increase in the use of intraoperative image consultation. Further research should focus on investigating the usefulness of touchless gestural interfaces in different types of surgical procedures and their effects on operative time.
Multispectral image-fused head-tracked vision system (HTVS) for driving applications
NASA Astrophysics Data System (ADS)
Reese, Colin E.; Bender, Edward J.
2001-08-01
Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronics Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
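The exact fusion law is not given in the abstract. A common form of additive (A+B) fusion, assumed here for illustration only, normalises the two co-registered frames and sums them with a mixing weight:

    # Minimal sketch (assumed form of additive "A+B" fusion) of co-registered
    # LWIR and image-intensified frames.
    import numpy as np

    def additive_fusion(lwir, i2, alpha=0.5):
        """Weighted additive fusion of two co-registered, same-size frames."""
        def norm(img):
            img = img.astype(float)
            return (img - img.min()) / (np.ptp(img) + 1e-9)
        fused = alpha * norm(lwir) + (1.0 - alpha) * norm(i2)
        return (255 * fused).astype(np.uint8)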
HALO: a reconfigurable image enhancement and multisensor fusion system
NASA Astrophysics Data System (ADS)
Wu, F.; Hickman, D. L.; Parker, Steve J.
2014-06-01
Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
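The paper does not specify the ICE algorithm. Purely as an illustration of the kind of local contrast enhancement such a platform might host, the sketch below uses CLAHE, a standard technique, as a stand-in:

    # Minimal sketch (CLAHE as a stand-in for an unspecified ICE function).
    import numpy as np
    from skimage import exposure

    def enhance_contrast(frame, clip_limit=0.01):
        """Local contrast enhancement of a single grayscale frame."""
        frame = frame.astype(float)
        frame = (frame - frame.min()) / (np.ptp(frame) + 1e-9)   # scale to [0, 1]
        return exposure.equalize_adapthist(frame, clip_limit=clip_limit)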
Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX
2007-05-17
... including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as ... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication ...
CATAVIÑA: new infrared camera for OAN-SPM
NASA Astrophysics Data System (ADS)
Iriarte, Arturo; Cruz-González, Irene; Martínez, Luis A.; Tinoco, Silvio; Lara, Gerardo; Ruiz, Elfego; Sohn, Erika; Bernal, Abel; Angeles, Fernando; Moreno, Arturo; Murillo, Francisco; Langarica, Rosalía; Luna, Esteban; Salas, Luis; Cajero, Vicente
2006-06-01
CATAVIÑA is a near-infrared camera system to be operated in conjunction with the existing multi-purpose near-infrared optical bench "CAMALEON" at OAN-SPM. Observing modes include direct imaging, spectroscopy, Fabry-Perot interferometry, and polarimetry. This contribution focuses on the optomechanics and detector controller of CATAVIÑA, which is planned to start operating later in 2006. The camera consists of an 8 inch LN2 dewar containing a 10-filter carousel, a radiation baffle, and the detector circuit-board mount. The system is based on a Rockwell 1024x1024 HgCdTe (HAWAII-I) FPA operating in the 1 to 2.5 micron window. The detector controller/readout system was designed and developed at the UNAM Instituto de Astronomia. It is based on five Texas Instruments DSK digital signal processor (DSP) modules: one module generates the detector and ADC-system control, while the remaining four are in charge of the acquisition of each of the detector's quadrants. Each DSP has a built-in expanded memory module in order to store more than one image. The detector readout and signal driver subsystems are mounted onto the dewar in a "back-pack" fashion, each containing four independent pre-amplifiers, converters, and signal drivers that communicate through optical fibers with their respective DSPs. The system allows the offset input voltage and converter gain to be programmed. The controller software architecture is based on a client/server model: the client sends commands through the TCP/IP protocol and acquires the image, while the server consists of a microcomputer with an embedded Linux operating system, which runs the main program that receives the user commands and interacts with the timing and acquisition DSPs. The observer's interface allows for several readout and image processing modes.
3-D video techniques in endoscopic surgery.
Becker, H; Melzer, A; Schurr, M O; Buess, G
1993-02-01
Three-dimensional visualisation of the operative field is an important requisite for the precise and fast handling of open surgical operations. Up to now, it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligature of larger vessels, which are difficult to perform without a sense of depth. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions, and widen the operative spectrum. In April 1992, a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres such as mobilisation of organs, preparation in deep spaces, and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany).
Design of a new type synchronous focusing mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jintao; Tan, Ruijun; Chen, Zhou; Zhang, Yongqi; Fu, Panlong; Qu, Yachen
2018-05-01
For a dual-channel telescopic imaging system composed of an infrared imaging system, a low-light-level imaging system, and an image fusion module, it is clear that fusing clear source images makes it easier to obtain high-definition fused images. When the target is imaged at distances from 15 m to infinity, focusing is needed to ensure the imaging quality of the dual-channel imaging system; therefore, a new type of synchronous focusing mechanism is designed. The synchronous focusing mechanism realizes the focusing function through synchronous translation of the imaging devices, and mainly comprises a screw-and-nut structure, a shaft-hole fit structure, and a spring-loaded steel-ball clearance-elimination structure. Starting from the synchronous focusing function of the two imaging devices, the structural characteristics of the synchronous focusing mechanism are introduced in detail and the focusing range is analyzed. The experimental results show that the synchronous focusing mechanism has the advantages of an ingenious design, high focusing accuracy, and stable, reliable operation.
A panoramic imaging system based on fish-eye lens
NASA Astrophysics Data System (ADS)
Wang, Ye; Hao, Chenyang
2017-10-01
Panoramic imaging has attracted close attention as one of the major technologies for AR and VR. Mainstream panoramic imaging techniques include fish-eye lenses, image stitching, and catadioptric imaging systems, and fish-eye lenses are widely used in wide-area video surveillance. The advantages of fish-eye lenses are that they are easy to operate and cost less, but how to correct the image distortion of fish-eye lenses has always been an important topic. In this paper, the image calibration algorithm for fish-eye lenses is studied by comparing interpolation methods, including bilinear and bicubic interpolation, which are used to optimize the corrected images.
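As a concrete illustration of one common correction approach (assuming an ideal equidistant fish-eye model, r = f·θ, and a known image centre, neither of which is stated in the abstract), the sketch below remaps a fish-eye image to a rectilinear view using bilinear interpolation:

    # Minimal sketch: equidistant fish-eye to rectilinear (perspective) remapping.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def fisheye_to_rectilinear(fish, f_fish, f_out, out_shape):
        """fish: 2-D grayscale fish-eye image; returns a perspective view."""
        h, w = out_shape
        cy_f, cx_f = (fish.shape[0] - 1) / 2.0, (fish.shape[1] - 1) / 2.0
        v, u = np.mgrid[0:h, 0:w].astype(float)
        u -= (w - 1) / 2.0                          # pixel offsets from the
        v -= (h - 1) / 2.0                          # output principal point
        rho = np.hypot(u, v)
        theta = np.arctan2(rho, f_out)              # angle from the optical axis
        r = f_fish * theta                          # equidistant projection radius
        scale = np.divide(r, rho, out=np.zeros_like(r), where=rho > 0)
        src_y = cy_f + v * scale
        src_x = cx_f + u * scale
        return map_coordinates(fish, [src_y, src_x], order=1, mode="constant")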
Computational imaging with a single-pixel detector and a consumer video projector
NASA Astrophysics Data System (ADS)
Sych, D.; Aksenov, M.
2018-02-01
Single-pixel imaging is a novel, rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of the reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. We also discuss the potential use of the single-pixel imaging system for quantum applications.
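The reconstruction algorithms compared in the paper are not detailed in the abstract. As a generic baseline, the sketch below simulates random binary projector patterns, synthetic bucket-detector measurements, and a least-squares reconstruction:

    # Minimal sketch (not the authors' setup): single-pixel imaging with random
    # binary patterns and least-squares reconstruction.
    import numpy as np

    rng = np.random.default_rng(1)
    side = 16                                     # reconstructed image is 16x16
    n_pix = side * side
    n_meas = 2 * n_pix                            # oversampled measurement set

    scene = np.zeros((side, side))
    scene[4:12, 6:10] = 1.0                       # simple synthetic object
    x_true = scene.ravel()

    patterns = rng.integers(0, 2, size=(n_meas, n_pix)).astype(float)   # DMD masks
    y = patterns @ x_true                          # bucket-detector measurements
    y += 0.01 * rng.standard_normal(n_meas)        # detector noise

    x_hat, *_ = np.linalg.lstsq(patterns, y, rcond=None)
    reconstruction = x_hat.reshape(side, side)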
Electronic magnification for astronomical camera tubes
NASA Technical Reports Server (NTRS)
Vine, J.; Hansen, J. R.; Pietrzyk, J. P.
1974-01-01
Definitions, test schemes, and analyses used to provide variable magnification in the image section of the television sensor for large space telescopes are outlined. Experimental results show a definite form of magnetic field distribution is necessary to achieve magnification in the range 3X to 4X. Coil systems to establish the required field shapes were built, and both image intensifiers and camera tubes were operated at high magnification. The experiments confirm that such operation is practical and can provide satisfactory image quality. The main problem with such a system was identified as heating of the photocathode due to concentration of coil power dissipation in that vicinity. Suggestions for overcoming this disadvantage are included.
NASA Astrophysics Data System (ADS)
Samara, M.; Michell, R. G.; Hampton, D. L.; Trondsen, T.
2012-12-01
The Multi-Spectral Observatory Of Sensitive EMCCDs (MOOSE) consists of 5 imaging systems and is the result of an NSF-funded Major Research Instrumentation project. The main objective of MOOSE is to provide a resource to all members of the scientific community that have interests in imaging low-light-level phenomena, such as aurora, airglow, and meteors. Each imager consists of an Andor DU-888 Electron Multiplying CCD (EMCCD), combined with a telecentric optics section, made by Keo Scientific Ltd., with a selection of available angular fields of view. During the northern hemisphere winter the system is typically based and operated at Poker Flat Research Range in Alaska, but any or all imagers can be shipped anywhere in individual stand-alone cases. We will discuss the main components of the MOOSE project, including the imagers, optics, lenses and filters, as well as the Linux-based control software that enables remote operation. We will also discuss the calibration of the imagers along with the initial deployments and testing done. We are requesting community input regarding operational modes, such as filter and field of view combinations, frame rates, and potentially moving some imagers to other locations, either for tomography or for larger spatial coverage. In addition, given the large volume of auroral image data already available, we are encouraging collaborations for which we will freely distribute the data and any analysis tools already developed. Most significantly, initial science highlights relating to aurora, airglow and meteors will be discussed in the context of the creative and innovative ways that the MOOSE observatory can be used in order to address a new realm of science topics, previously unachievable with traditional single imager systems.
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder, or a DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the current captured image from the video camcorder (or from the DVD player) is processed on the board but is displayed on an LCD monitor. The major difference between our system and other existing conventional systems is that the image-processing functions are performed on the board instead of the PC, so that the functions can be used for further developments on the board. The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX) technology to create a link between them. For the image processing functions, we developed three main groups: (1) Point Processing, (2) Filtering, and (3) 'Others'. Point Processing includes rotation, negation, and mirroring. The Filtering category provides median, adaptive, smoothing, and sharpening filters in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation, and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on a TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The results demonstrate that our system is adequate for real-time image capturing and can be applied to applications such as medical imaging and video surveillance.
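For reference, the "Point Processing" group (negation, mirroring, rotation) reduces to simple array operations. The sketch below shows PC-side equivalents, not the DM642 code:

    # Minimal sketch of the Point Processing operations on 8-bit images.
    import numpy as np

    def negate(img):          # 8-bit negative
        return 255 - img

    def mirror(img):          # horizontal mirroring
        return img[:, ::-1]

    def rotate90(img):        # rotation by 90 degrees counter-clockwise
        return np.rot90(img)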
The near infrared imaging system for the real-time protection of the JET ITER-like wall
NASA Astrophysics Data System (ADS)
Huber, A.; Kinna, D.; Huber, V.; Arnoux, G.; Balboa, I.; Balorin, C.; Carman, P.; Carvalho, P.; Collins, S.; Conway, N.; McCullen, P.; Jachmich, S.; Jouve, M.; Linsmeier, Ch; Lomanowski, B.; Lomas, P. J.; Lowry, C. G.; Maggi, C. F.; Matthews, G. F.; May-Smith, T.; Meigs, A.; Mertens, Ph; Nunes, I.; Price, M.; Puglia, P.; Riccardo, V.; Rimini, F. G.; Sergienko, G.; Tsalas, M.; Zastrow, K.-D.; contributors, JET
2017-12-01
This paper describes the design, implementation, and operation of the near infrared (NIR) imaging diagnostic system of the JET ITER-like wall (JET-ILW) plasma experiment and its integration into the existing JET protection architecture. The imaging system comprises four wide-angle views, four tangential divertor views, and two top views of the divertor, covering 66% of the first wall and up to 43% of the divertor. The operating temperature ranges which must be observed by the NIR protection cameras for the materials used on JET are: Be, 700 °C-1400 °C; W coating, 700 °C-1370 °C; bulk W, 700 °C-1400 °C. The Real-Time Protection system has operated routinely since 2011 and has successfully demonstrated its capability to avoid overheating of the main-chamber beryllium wall as well as of the divertor W and W-coated carbon fibre composite (CFC) tiles. During this period, fewer than 0.5% of the terminated discharges were aborted by a malfunction of the system, and about 2%-3% of the discharges were terminated due to the detection of actual hot spots.
EOID Evaluation and Automated Target Recognition
2002-09-30
... Electro-Optic IDentification (EOID) sensors into shallow-water littoral-zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects (MLOs) that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long-term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist ...
EOID Evaluation and Automated Target Recognition
2001-09-30
... Electro-Optic IDentification (EOID) sensors into shallow-water littoral-zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long-term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist the ...
SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell
González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-García, Mateo; Dorta-Naranjo, Blas-Pablo
2008-01-01
This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar. PMID:27879884
Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.
Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk
2009-07-01
For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.
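As a quick check of the timing budget implied by the figures above (433 images per second against a mouse heart rate of up to 13 beats per second), the per-frame time and the number of frames available per cardiac cycle follow from simple arithmetic; the snippet only restates the abstract's numbers.

```python
frames_per_second = 433       # measured back-end throughput (from the abstract)
heart_rate_bps = 13           # upper end of the mouse heart rate, beats per second

frame_period_ms = 1e3 / frames_per_second             # ~2.3 ms available per B-mode frame
frames_per_beat = frames_per_second / heart_rate_bps  # ~33 frames per cardiac cycle

print(f"{frame_period_ms:.1f} ms per frame, ~{frames_per_beat:.0f} frames per heartbeat")
```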
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f imaging system, the optical convolution of the two input images can be achieved in the image plane.
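The underlying relation is the convolution theorem: a product formed in the Fourier plane of a 4f system corresponds to a convolution in the image plane. A purely numerical analogue (which says nothing about the EIT storage physics itself) can be sketched as follows; the toy input patterns are arbitrary.

```python
import numpy as np

def fourier_plane_convolution(img_a, img_b):
    """Numerical analogue of 4f convolution: multiply the two input spectra
    in the 'Fourier plane' and transform back to the 'image plane'."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    return np.real(np.fft.ifft2(A * B))   # circular convolution of the two inputs

# Toy inputs: two 64x64 patterns.
a = np.zeros((64, 64)); a[20:30, 20:30] = 1.0
b = np.zeros((64, 64)); b[30:34, 30:34] = 1.0
conv = fourier_plane_convolution(a, b)
```

Here the product of the two spectra returns the (circular) convolution of the inputs, which is the operation the optical system performs physically in the image plane.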
Study on ice cloud optical thickness retrieval with MODIS IR spectral bands
NASA Astrophysics Data System (ADS)
Zhang, Hong; Li, Jun
2005-01-01
The operational Moderate-Resolution Imaging Spectroradiometer (MODIS) products for cloud properties such as cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), cloud optical thickness (COT), and cloud phase (CP) have been available to users globally. An approach to retrieve COT is investigated using MODIS infrared (IR) window spectral bands (8.5 μm, 11 μm, and 12 μm). The COT retrieval from MODIS IR bands has the potential to provide microphysical properties with high spatial resolution at night. The results are compared with those from operational MODIS products derived from the visible (VIS) and near-infrared (NIR) bands during the day. The sensitivity of COT to MODIS spectral brightness temperature (BT) and BT difference (BTD) values is studied. A look-up table is created from a cloudy radiative transfer model accounting for cloud absorption and scattering for the cloud microphysical property retrieval. The potential applications and limitations are also discussed. This algorithm can be applied to future imager systems such as the Visible/Infrared Imager/Radiometer Suite (VIIRS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Advanced Baseline Imager (ABI) on the Geostationary Operational Environmental Satellite (GOES)-R.
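A look-up-table retrieval of this kind can be sketched schematically: the table maps candidate COT values to modelled brightness temperatures and BTDs, and the retrieval picks the entry closest to the observation. The table values and weighting below are placeholders, not radiative-transfer output, and the nearest-neighbour match stands in for whatever interpolation the operational algorithm uses.

```python
import numpy as np

# Hypothetical look-up table: COT vs. modelled 11-um BT and 8.5-11 um BTD.
# The numbers are illustrative placeholders only.
lut_cot  = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
lut_bt11 = np.array([285., 278., 268., 255., 242., 235.])   # K
lut_btd  = np.array([1.8, 1.5, 1.0, 0.4, -0.2, -0.5])       # K

def retrieve_cot(bt11_obs, btd_obs, w_btd=2.0):
    """Pick the LUT entry minimising a weighted distance to the observed
    brightness temperature and BTD (schematic nearest-neighbour inversion)."""
    cost = (lut_bt11 - bt11_obs) ** 2 + w_btd * (lut_btd - btd_obs) ** 2
    return lut_cot[np.argmin(cost)]

print(retrieve_cot(bt11_obs=260.0, btd_obs=0.6))   # -> 4.0 with these placeholder values
```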
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
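For readers unfamiliar with what a tone-mapping operator does, a minimal global operator on luminance is sketched below. It is deliberately simple (a Reinhard-style compression) and is not the combined XYZ/cone-response model proposed in the paper; the luminance weights and the `key` parameter are conventional assumptions.

```python
import numpy as np

def global_tonemap(hdr_rgb, key=0.18, eps=1e-6):
    """Minimal Reinhard-style global tone mapping of an HDR image (float RGB).
    Luminance-only compression; not the cone-response model of the paper."""
    # Luminance (Rec. 709 weights) and log-average scene luminance.
    L = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    L_avg = np.exp(np.mean(np.log(L + eps)))
    L_scaled = key * L / (L_avg + eps)
    L_display = L_scaled / (1.0 + L_scaled)          # compress to [0, 1)
    # Re-apply the compressed luminance to the colour channels.
    ratio = L_display / (L + eps)
    return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)
```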
Submillimetre wave imaging and security: imaging performance and prediction
NASA Astrophysics Data System (ADS)
Appleby, R.; Ferguson, S.
2016-10-01
Within the European Commission Seventh Framework Programme (FP7), CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) has designed and is fabricating a stand-off system operating at sub-millimetre wave frequencies for the detection of objects concealed on people. This system scans people as they walk by the sensor. This paper presents the top level system design which brings together both passive and active sensors to provide good performance. The passive system operates in two bands between 100 and 600 GHz and is based on a cryogen-free cooled focal plane array sensor, whilst the active system is a solid-state 340 GHz radar. A modified version of OpenFX was used for modelling the passive system. This model was recently modified to include realistic location-specific skin temperature and to accept animated characters wearing up to three layers of clothing that move dynamically, such as those typically found in cinematography. Targets under clothing have been modelled and the performance simulated. The strengths and weaknesses of this modelling approach are discussed.
NASA Astrophysics Data System (ADS)
Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won
2005-12-01
Embedded systems have been applied to many fields, including households and industrial sites, and user-interface technology with simple on-screen displays is implemented more and more. User demands are increasing and the systems find more applicable fields owing to the high penetration rate of the Internet, so the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame from the web camera is compared with the previous one to measure a displacement vector, using a block-matching algorithm and an edge-detection algorithm for fast speed. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. The operating system is an embedded Linux kernel ported to the board with a mounted root file system, and the stored images are sent to the client PC through the web browser using the TCP/IP network functions of Linux.
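The displacement estimation described above (block matching between consecutive frames, with the resulting vector driving the pan/tilt motors) can be illustrated with a small sketch. This is not the original embedded C code; it is a plain exhaustive sum-of-absolute-differences search in Python, assuming grayscale frames comfortably larger than the block plus the search range. The edge-detection step the abstract mentions for speeding up the search is omitted.

```python
import numpy as np

def block_match(prev, curr, block=16, search=8):
    """Estimate the displacement of the central block between two grayscale
    frames with an exhaustive sum-of-absolute-differences (SAD) search."""
    h, w = prev.shape
    y0, x0 = h // 2 - block // 2, w // 2 - block // 2
    ref = prev[y0:y0 + block, x0:x0 + block].astype(np.int32)

    best, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y0 + dy:y0 + dy + block, x0 + dx:x0 + dx + block].astype(np.int32)
            sad = np.abs(cand - ref).sum()
            if best is None or sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx      # displacement vector fed to the pan/tilt control
```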
Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1992-01-01
Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.
Embedded image processing engine using ARM cortex-M4 based STM32F407 microcontroller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samaiya, Devesh, E-mail: samaiya.devesh@gmail.com
2014-10-06
Due to advancements in low-cost, easily available yet powerful hardware and the revolution in open-source software, the urge to build newer, more interactive machines and electronic systems has increased manifold among engineers. To make systems more interactive, designers need easy-to-use sensor systems. Giving the boon of vision to machines was never easy; though no longer impossible these days, it is still neither easy nor inexpensive. This work presents a low-cost, moderate-performance, programmable image processing engine. The engine is able to capture real-time images, store them in permanent storage, and perform preprogrammed image processing operations on the captured images.
Goswami, R; Pi, D; Pal, J; Cheng, K; Hudoba De Badyn, M
2015-06-01
The study evaluated the performance of a dynamic imaging telepathology system (Panoptiq™) as a diagnostic aid to the identification of peripheral blood film (PBF) abnormalities. The study assumed that laboratory personnel working in a clinical laboratory were operating the telepathology system to seek a diagnostic opinion from an external consulting hematopathologist. The study examined 100 blood films, encompassing 23 different hematological diseases as well as reactive and normal cases. The study revealed that with real-time image transmission in the live scanning mode of operation, the telepathology system was able to aid reviewers in achieving excellent accuracy, that is, correct interpretation of morphologic abnormalities in 83/84 of the hematologic diseases and 12/12 of the reactive/normal conditions (sensitivity: 0.99; specificity: 1.00). In contrast, when only saved static images in the digital capture mode of operation were reviewed remotely, interpretative omissions occurred in 8/84 of the hematologic diseases and 0/12 of the reactive/normal conditions (sensitivity: 0.91; specificity: 1.00). It is hypothesized that real-time operator-reviewer communication during live scanning played an important role in the identification of key morphologic abnormalities for review. Our study showed the Panoptiq system can be adopted reliably as a dynamic telepathology tool in aiding community laboratories in the triage of PBF cases for external diagnostic consultation. © 2014 John Wiley & Sons Ltd.
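The reported sensitivity and specificity follow directly from the counts in the abstract; the short check below just recomputes them for the live scanning mode.

```python
# Counts taken from the abstract (live scanning mode).
correct_disease, total_disease = 83, 84   # hematologic diseases correctly interpreted
correct_normal, total_normal = 12, 12     # reactive/normal cases correctly interpreted

sensitivity = correct_disease / total_disease   # 0.988... -> reported as 0.99
specificity = correct_normal / total_normal     # 1.00
print(f"sensitivity ~ {sensitivity:.2f}, specificity ~ {specificity:.2f}")
```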
Wide-Angle, Flat-Field Telescope
NASA Technical Reports Server (NTRS)
Hallam, K. L.; Howell, B. J.; Wilson, M. E.
1987-01-01
The all-reflective system is unvignetted. The wide-angle telescope uses unobstructed reflecting elements to produce a flat image. With no refracting elements there is no chromatic aberration, and the telescope operates over a spectral range from the infrared to the far ultraviolet. The telescope can be used with such image detectors as photographic film, vidicons, and solid-state image arrays.
SU-E-I-06: Measurement of Skin Dose from Dental Cone-Beam CT Scans.
Akyalcin, S; English, J; Abramovitch, K; Rong, J
2012-06-01
To directly measure skin dose using point dosimeters from dental cone-beam CT (CBCT) scans. To compare the results among three different dental CBCT scanners and compare the CBCT results with those from a conventional panoramic and cephalometric dental imaging system. A head anthropomorphic phantom was used with nanoDOT dosimeters attached to specified anatomic landmarks of selected radiosensitive tissues of interest. To ensure reliable measurement results, three dosimeters were used for each location. The phantom was scanned under various modes of operation and scan protocols for typical dental exams on three dental CBCT systems plus a conventional dental imaging system. The Landauer OSL nanoDOT dosimeters were calibrated under the same imaging conditions as the head phantom scan protocols, and specifically for each of the imaging systems. Using nanoDOT dosimeters, skin doses at several positions on the surface of an adult head anthropomorphic phantom were measured for clinical dental imaging. The measured skin doses ranged from 0.04 to 4.62 mGy depending on dosimeter positions and imaging systems. The highest dose location was at the parotid surface for all three CBCT scanners. The surface doses at the locations of the eyes were approximately 4.0 mGy, well below the 500 mGy threshold for possibly causing cataract development. The results depend on x-ray tube output (kVp and mAs) and are also sensitive to SFOV. Compared with the conventional dental imaging system operated in panoramic and cephalometric modes, doses from all three CBCT systems were at least an order of magnitude higher. No image artifact was caused by the presence of nanoDOT dosimeters in the head phantom images. Direct measurements of skin dose using nanoDOT dosimeters provided accurate skin dose values without any image artifacts. The results of the skin dose measurements serve as dose references in guiding future dose optimization efforts in dental CBCT imaging. © 2012 American Association of Physicists in Medicine.
EDITORIAL: Imaging systems and techniques Imaging systems and techniques
NASA Astrophysics Data System (ADS)
Yang, Wuqiang; Giakos, George; Nikita, Konstantina; Pastorino, Matteo; Karras, Dimitrios
2009-10-01
The papers in this special issue focus on providing the state-of-the-art approaches and solutions to some of the most challenging imaging areas, such as the design, development, evaluation and applications of imaging systems, measuring techniques, image processing algorithms and instrumentation, with an ultimate aim of enhancing the measurement accuracy and image quality. This special issue explores the principles, engineering developments and applications of new imaging systems and techniques, and encourages broad discussion of imaging methodologies, shaping the future and identifying emerging trends. The multi-faceted field of imaging requires drastic adaptation to the rapid changes in our society, economy, environment and technological evolution. There is an urgent need to address new problems, which tend to be either static but complex, or dynamic, e.g. rapidly evolving with time, with many unknowns, and to propose innovative solutions. For instance, the battles against cancer and terror, monitoring of space resources and enhanced awareness, management of natural resources and environmental monitoring are some of the areas that need to be addressed. The complexity of the involved imaging scenarios and demanding design parameters, e.g. speed, signal-to-noise ratio (SNR), specificity, contrast, spatial resolution, scatter rejection, complex background and harsh environments, necessitate the development of a multi-functional, scalable and efficient imaging suite of sensors, solutions driven by innovation, and operation on diverse detection and imaging principles. Efficient medical imaging techniques capable of providing physiological information at the molecular level present another important research area. Advanced metabolic and functional imaging techniques, operating on multiple physical principles, and using high-resolution, high-selectivity nano-imaging methods, quantum dots, nanoparticles, biomarkers, nanostructures, nanosensors, micro-array imaging chips and nano-clinics for optical diagnostics and targeted therapy, can play an important role in the diagnosis and treatment of cancer. These techniques can also be used to provide efficient drug delivery for treatment of other diseases, with increased sensitivity and specificity. Similarly, enhanced stand-off detection, classification, identification and surveillance techniques, for comprehensive civilian and military target protection and enhanced space situational awareness can open new frontiers of research and applications in the defence arena and homeland security. For instance, the development of potential imaging sensor architectures, enhanced remote sensing systems, ladars, lidars and radars can provide data capable of ensuring continuous monitoring of various imaging/physical/chemical parameters under different operating conditions, using both active and passive detection principles, reconfigurable and scalable focal plane array architectures, reliable systems for stand-off detection of explosives, and enhanced airport security. The above areas pose challenging problems to the technical community and indicate an ever-growing need for innovative and auspicious solutions. We would like to thank all authors for their valuable contributions, without which this special issue would not have become reality.
NASA Astrophysics Data System (ADS)
Dindar, Serdar; Kaewunruen, Sakdirat; Osman, Mohd H.
2017-10-01
One of the emerging significant advances in engineering, satellite imaging (SI) is becoming very common in all kinds of civil engineering projects, e.g., bridges, canals, dams, earthworks, power plants, water works, etc., to provide an accurate, economical and expeditious means of acquiring a rapid assessment. Satellite imaging services in general utilise combinations of high quality satellite imagery, image processing and interpretation to obtain specific required information, e.g. surface movement analysis. To extract, manipulate and provide such precise knowledge, several systems, including geographic information systems (GIS) and the global positioning system (GPS), are generally used for orthorectification. Although such systems are useful for mitigating project risk, their productiveness is arguable and the operational risk after application is open to discussion. Since the applicability of any novel technique to the railway industry is often measured in terms of whether, and to what degree, in-depth knowledge has been gained, errors during its operation mean that such an application also introduces risk into ongoing projects. This study reviews what can be achieved for risk management of railway turnouts through satellite imaging. The methodology is established on the basis of other published articles in this area and the results of applications, to understand how applicable such an imaging process is to railway turnouts, and how sub-systems in turnouts can be effectively traced/operated with less risk than at present. As a result of this review study, the aim is to help the railway sector better understand risk mitigation in particular applications.
Computed tomography image-guided surgery in complex acetabular fractures.
Brown, G A; Willis, M C; Firoozbakhsh, K; Barmada, A; Tessman, C L; Montgomery, A
2000-01-01
Eleven complex acetabular fractures in 10 patients were treated by open reduction with internal fixation incorporating computed tomography image-guided software intraoperatively. Each of the implants placed under image guidance was found to be accurate and without penetration of the pelvis or joint space. The setup time for the system was minimal. Accuracy in the range of 1 mm was found when registration was precise (eight cases) and was in the range of 3.5 mm when registration was only approximate (three cases). Added benefits included reduced intraoperative fluoroscopic time, less need for extensive dissection, and obviation of additional surgical approaches in some cases. Compared with a series of similar fractures treated before this image-guided series, the reduction in operative time was significant. For patients with combined complex anterior and posterior fractures, the average operation times with and without application of the three-dimensional imaging technique were, respectively, 5 hours 15 minutes and 6 hours 14 minutes, revealing 16% less operative time for those who had surgery using image guidance. In the single-column fracture group, the operation time for those with the three-dimensional imaging application was 2 hours 58 minutes and for those with traditional surgery 3 hours 42 minutes, indicating 20% less operative time for those treated with the imaging modality. Intraoperative computed tomography guided imagery was found to be an accurate and suitable method for use in the operative treatment of complex acetabular fractures with substantial displacement.
Feasibility study of low-dose intra-operative cone-beam CT for image-guided surgery
NASA Astrophysics Data System (ADS)
Han, Xiao; Shi, Shuanghe; Bian, Junguo; Helm, Patrick; Sidky, Emil Y.; Pan, Xiaochuan
2011-03-01
Cone-beam computed tomography (CBCT) has been increasingly used during surgical procedures for providing accurate three-dimensional anatomical information for intra-operative navigation and verification. High-quality CBCT images are in general obtained through reconstruction from projection data acquired at hundreds of view angles, which is associated with a non-negligible amount of radiation exposure to the patient. In this work, we have applied a novel image-reconstruction algorithm, the adaptive-steepest-descent-POCS (ASD-POCS) algorithm, to reconstruct CBCT images from projection data at a significantly reduced number of view angles. Preliminary results from experimental studies involving both simulated data and real data show that images of comparable quality to those presently available in clinical image-guidance systems can be obtained by use of the ASD-POCS algorithm from a fraction of the projection data that are currently used. The result implies potential value of the proposed reconstruction technique for low-dose intra-operative CBCT imaging applications.
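The general idea of few-view iterative reconstruction (enforce consistency with the measured projections while steering the image toward low total variation) can be sketched with a toy linear model. The snippet below is a schematic alternation of an ART-style data-consistency pass, a non-negativity constraint, and a steepest-descent TV step; it is not the ASD-POCS algorithm as specified by its authors, and the relaxation and step-size values are arbitrary illustrative choices.

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of an isotropic total-variation surrogate of a 2-D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    # Negative divergence of the normalised gradient field.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    return -div

def sparse_view_recon(A, b, shape, n_iter=50, relax=0.2, tv_step=0.02):
    """Schematic few-view reconstruction: ART-like data consistency,
    non-negativity, and total-variation descent, alternated n_iter times.
    A: (n_rays x n_pixels) system matrix; b: measured projection data."""
    x = np.zeros(A.shape[1])
    row_norm = (A ** 2).sum(axis=1) + 1e-12
    for _ in range(n_iter):
        for i in range(A.shape[0]):                    # data-consistency pass
            x += relax * (b[i] - A[i] @ x) / row_norm[i] * A[i]
        x = np.maximum(x, 0.0)                         # non-negativity constraint
        img = x.reshape(shape)
        img = img - tv_step * tv_grad(img)             # TV-reducing step
        x = img.ravel()
    return x.reshape(shape)
```

In the published ASD-POCS formulation the TV step size is adapted to the magnitude of the data-consistency update at each iteration; that adaptive control is the part this sketch omits.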
Positron emission tomography and optical tissue imaging
Falen, Steven W [Carmichael, CA; Hoefer, Richard A [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; McKisson, John [Hampton, VA; Kross, Brian [Yorktown, VA; Proffitt, James [Newport News, VA; Stolin, Alexander [Newport News, VA; Weisenberger, Andrew G [Yorktown, VA
2012-05-22
A mobile compact imaging system that combines both PET imaging and optical imaging into a single system which can be located in the operating room (OR) and provides faster feedback to determine if a tumor has been fully resected and if there are adequate surgical margins. While final confirmation is obtained from the pathology lab, such a device can reduce the total time necessary for the procedure and the number of iterations required to achieve satisfactory resection of a tumor with good margins.
NASA Astrophysics Data System (ADS)
Dumpuri, Prashanth; Clements, Logan W.; Li, Rui; Waite, Jonathan M.; Stefansic, James D.; Geller, David A.; Miga, Michael I.; Dawant, Benoit M.
2009-02-01
Preoperative planning combined with image guidance has shown promise towards increasing the accuracy of liver resection procedures. The purpose of this study was to validate one such preoperative planning tool for four patients undergoing hepatic resection. Preoperative computed tomography (CT) images acquired before surgery were used to identify tumor margins and to plan the surgical approach for resection of these tumors. Surgery was then performed with intraoperative digitization data acquired by an FDA-approved image-guided liver surgery system (Pathfinder Therapeutics, Inc., Nashville, TN). Within 5-7 days after surgery, post-operative CT image volumes were acquired. Registration of data within a common coordinate reference was achieved and preoperative plans were compared to the postoperative volumes. Semi-quantitative comparisons are presented in this work, and preliminary results indicate that significant liver regeneration/hypertrophy may be present in the postoperative CT images. This could challenge pre/post-operative CT volume change comparisons as a means to evaluate the accuracy of preoperative surgical plans.
Integrated editing system for Japanese text and image information "Linernote"
NASA Astrophysics Data System (ADS)
Tanaka, Kazuto
The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system has been developed on the concept of electronic publishing. It is composed of an NEC PC-9801 VX personal computer and other peripherals. Sentence, drawing, and image data are input and edited under the integrated operating environment of the system, and the final text is printed out by a laser printer. The handling efficiency of time-consuming work such as pattern input or page make-up has been improved by a draft image data indication method on the CRT. It is the latest DTP system equipped with three major functions, namely, typesetting for high quality text editing, easy drawing/tracing, and high speed image processing.
The Lixiscope: a Pocket-size X-ray Imaging System
NASA Technical Reports Server (NTRS)
Yin, L. I.; Seltzer, S. M.
1978-01-01
A Low Intensity X-ray Imaging device with the acronym LIXISCOPE is described. The Lixiscope has a small format and is powered only by a 2.7 V battery. The high inherent gain of the Lixiscope permits the use of radioactive sources in lieu of X-ray machines in some fluoroscopic applications. In this mode of operation the complete X-ray imaging system is truly portable and pocket-sized.
Yoshihiro, Akiko; Nakata, Norio; Harada, Junta; Tada, Shimpei
2002-01-01
Although local area networks (LANs) are commonplace in hospital-based radiology departments today, wireless LANs are still relatively unknown and untried. A linked wireless reporting system was developed to improve work throughput and efficiency. It allows radiologists, physicians, and technologists to review current radiology reports and images and instantly compare them with reports and images from previous examinations. This reporting system also facilitates creation of teaching files quickly, easily, and accurately. It consists of a Digital Imaging and Communications in Medicine 3.0-based picture archiving and communication system (PACS), a diagnostic report server, and portable laptop computers. The PACS interfaces with magnetic resonance imagers, computed tomographic scanners, and computed radiography equipment. The same kind of functionality is achievable with a wireless LAN as with a wired LAN, with comparable bandwidth but with less cabling infrastructure required. This wireless system is presently incorporated into the operations of the emergency and radiology departments, with future plans calling for applications in operating rooms, outpatient departments, all hospital wards, and intensive care units. No major problems have been encountered with the system, which is in constant use and appears to be quite successful. Copyright RSNA, 2002
Multigeneration data migration from legacy systems
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Liu, Brent J.; Kho, Hwa T.; Tao, Wenchao; Wang, Cun; McCoy, J. Michael
2003-05-01
The migration of image data from different generations of legacy archive systems represents a technical challenge and an incremental cost in transitions to newer generations of PACS. UCLA medical center has elected to completely replace the existing PACS infrastructure, encompassing several generations of legacy systems, with a new commercial system providing enterprise-wide image management and communication. One of the most challenging parts of the project was the migration of large volumes of legacy images into the new system. Planning of the migration required the development of specialized software and hardware, and included different phases of data mediation from existing databases to the new PACS database prior to the migration of the image data. The project plan included a detailed analysis of the resources and cost of data migration to optimize the process and minimize the delay of a hybrid operation in which the legacy systems need to remain operational. Our analysis and project planning showed that data migration represents the most critical path in the process of PACS renewal. Careful planning and optimization of the project timeline and allocated resources is critical to minimize the financial impact and the time delays that such migrations can impose on the implementation plan.
Jaya, T; Dheeba, J; Singh, N Albert
2015-12-01
Diabetic retinopathy is a major cause of vision loss in diabetic patients. Currently, there is a need for making decisions using intelligent computer algorithms when screening a large volume of data. This paper presents an expert decision-making system designed using a fuzzy support vector machine (FSVM) classifier to detect hard exudates in fundus images. The optic discs in the colour fundus images are segmented to avoid false alarms, using morphological operations and the circular Hough transform. To discriminate between exudate and non-exudate pixels, colour and texture features are extracted from the images. These features are given as input to the FSVM classifier. The classifier analysed 200 retinal images collected from diabetic retinopathy screening programmes. The tests made on the retinal images show that the proposed detection system has better discriminating power than the conventional support vector machine. With the best combination of FSVM and feature sets, the area under the receiver operating characteristic curve reached 0.9606, which corresponds to a sensitivity of 94.1% with a specificity of 90.0%. The results suggest that detecting hard exudates using FSVM contributes to computer-assisted detection of diabetic retinopathy and serves as a decision support system for ophthalmologists.
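The two pre-processing ideas named above (optic-disc localisation via the circular Hough transform, and weighting of training samples before an SVM) can be sketched as follows. This is not the paper's FSVM formulation: the fuzzy memberships are approximated here by per-sample weights in an ordinary RBF SVM, the colour/texture feature extraction is omitted, and the radius range is a guess.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks
from sklearn.svm import SVC

def locate_optic_disc(rgb_fundus, radii=np.arange(40, 80, 4)):
    """Approximate optic-disc localisation with edge detection plus a circular
    Hough transform, so the bright disc can be excluded from exudate candidates."""
    edges = canny(rgb2gray(rgb_fundus), sigma=2.0)
    h = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(h, radii, total_num_peaks=1)
    return int(cx[0]), int(cy[0]), int(r[0])

def train_exudate_classifier(features, labels, memberships):
    """Stand-in for the fuzzy SVM: per-sample fuzzy memberships are passed as
    sample weights to a standard RBF-kernel SVM."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True)
    clf.fit(features, labels, sample_weight=memberships)
    return clf
```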
Real-Time Imaging with a Pulsed Coherent CO2 Laser Radar
1997-01-01
30 joule) transmitted energy levels has just begun. The FLD program will conclude in 1997 with the demonstration of a full-up, real-time operating system. ... The master system and VMEbus controller is an off-the-shelf controller based on the Motorola 68040 processor running the VxWorks real-time operating system. Application
Takamatsu, Daiko; Yoneyama, Akio; Asari, Yusuke; Hirano, Tatsumi
2018-02-07
A fundamental understanding of salt concentrations in lithium-ion battery electrolytes during battery operation is important for the optimal operation and design of lithium-ion batteries. However, there are few techniques that can quantitatively characterize salt concentration distributions in the electrolytes during battery operation. In this paper, we demonstrate that in operando X-ray phase imaging can quantitatively visualize the salt concentration distributions that arise in electrolytes during battery operation. From quantitative evaluation of the concentration distributions at steady state, we obtained the salt diffusivities in electrolytes with different initial salt concentrations. Because it imposes no restrictions on the samples and offers high temporal and spatial resolution, X-ray phase imaging will be a versatile technique for evaluating the electrolytes, both aqueous and nonaqueous, of many electrochemical systems.
Venus Aerobot Surface Science Imaging System (VASSIS)
NASA Technical Reports Server (NTRS)
Greeley, Ronald
1999-01-01
The VASSIS task was to design and develop an imaging system and container for operation above the surface of Venus in preparation for a Discovery-class mission involving a Venus aerobot balloon. The technical goals of the effort were to: a) evaluate the possible nadir-viewed surface image quality as a function of wavelength and altitude in the Venus lower atmosphere, b) design a pressure vessel to contain the imager and supporting electronics that will meet the environmental requirements of the VASSIS mission, c) design and build a prototype imaging system including an Active-Pixel Sensor camera head and VASSIS-like optics that will meet the science requirements. The VASSIS science team developed a set of science requirements for the imaging system upon which the development work of this task was based.
Tsuchiya, Masahiko; Mizutani, Koh; Funai, Yusuke; Nakamoto, Tatsuo
2016-02-01
Ultrasound-guided procedures may be easier to perform when the operator's eye axis, the needle puncture site, and the ultrasound image display form a straight line in the puncture direction. However, such methods have not been well tested in clinical settings because that arrangement is often impossible due to limited space in the operating room. We developed a wireless remote display system for ultrasound devices using a tablet computer (iPad Mini), which allows easy display of images at nearly any location chosen by the operator. We hypothesized that the in-line layout of ultrasound images provided by this system would allow for secure and quick catheterization of the radial artery. We enrolled first-year medical interns (n = 20) who had no prior experience with ultrasound-guided radial artery catheterization to perform it using a short-axis out-of-plane approach with two different methods. With the conventional method, only the ultrasound machine placed at the side of the patient's head, across the targeted forearm, was utilized. With the tablet method, the ultrasound images were displayed on an iPad Mini positioned on the arm in alignment with the operator's eye axis and the needle puncture direction. The success rate and time required for catheterization were compared between the two methods. The success rate was significantly higher (100 vs. 70%, P = 0.02) and the catheterization time significantly shorter (28.5 ± 7.5 vs. 68.2 ± 14.3 s, P < 0.001) with the tablet method as compared to the conventional method. An ergonomic straight arrangement of the image display is crucial for successful and quick completion of ultrasound-guided arterial catheterization. The present remote display system is a practical method for providing such an arrangement.
ACTIM: an EDA initiated study on spectral active imaging
NASA Astrophysics Data System (ADS)
Steinvall, O.; Renhorn, I.; Ahlberg, J.; Larsson, H.; Letalick, D.; Repasi, E.; Lutzmann, P.; Anstett, G.; Hamoir, D.; Hespel, L.; Boucher, Y.
2010-10-01
This paper describes ongoing work from an EDA-initiated study on Active Imaging with emphasis on using multi- or broadband spectral lasers and receivers. Present laser-based imaging and mapping systems are mostly based on fixed-frequency lasers. On the other hand, great progress has recently occurred in passive multi- and hyperspectral imaging, with applications ranging from environmental monitoring and geology to mapping, military surveillance, and reconnaissance. Databases of spectral signatures allow the possibility to discriminate between different materials in the scene. Present multi- and hyperspectral sensors mainly operate in the visible and short wavelength region (0.4-2.5 μm) and rely on solar radiation, giving shortcomings due to shadows, clouds, illumination angles, and lack of night operation. Active spectral imaging, however, will largely overcome these difficulties through complete control of the illumination. Active illumination enables spectral night and low-light operation besides providing a robust way of obtaining polarization and high resolution 2D/3D information. Recent development of broadband lasers and advanced imaging 3D focal plane arrays has led to new opportunities for advanced spectral and polarization imaging with high range resolution. Fusing the knowledge of ladar and passive spectral imaging will result in new capabilities in the field of EO sensing, to be shown in the study. We will present an overview of technology, systems and applications for active spectral imaging and propose future activities in connection with some prioritized applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossano, G.S.
1989-02-01
A microcomputer based data acquisition system has been developed for astronomical observing with two-dimensional infrared detector arrays operating at high pixel rates. The system is based on a 16-bit 8086/8087 microcomputer operating at 10 MHz. Data rates of up to 560,000 pixels/sec from arrays of up to 4096 elements are supported using the microcomputer system alone. A hardware co-adder the authors are developing permits data accumulation at rates of up to 1.67 million pixels/sec in both staring and chopped data acquisition modes. The system has been used for direct imaging and for data acquisition in a Fabry-Perot Spectrometer developed by NRL. The hardware is operated using interactive software which supports the several available modes of data acquisition, and permits data display and reduction during observing sessions.
NASA Astrophysics Data System (ADS)
Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.
1986-06-01
In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing, and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of a combined throughput of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.
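The quoted aggregate bandwidth of the image data bus can be sanity-checked with simple arithmetic; the per-bus transfer rate below is inferred from the figures in the abstract, not stated in it.

```python
n_buses = 8
bus_width_bytes = 16 // 8            # each bus is 16 bits wide
total_mbytes_per_s = 144             # quoted combined bandwidth

per_bus_mbytes_per_s = total_mbytes_per_s / n_buses                  # 18 MBytes/sec per bus
transfers_per_s_millions = per_bus_mbytes_per_s / bus_width_bytes    # ~9 M word transfers/sec per bus
print(per_bus_mbytes_per_s, transfers_per_s_millions)
```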
Real-time inspection by submarine images
NASA Astrophysics Data System (ADS)
Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe
1996-10-01
A real-time application of computer vision concerning the tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in the simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well above what the actual ROVs and their security features allow.
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty to combine semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
Image quality specification and maintenance for airborne SAR
NASA Astrophysics Data System (ADS)
Clinard, Mark S.
2004-08-01
Specification, verification, and maintenance of image quality over the lifecycle of an operational airborne SAR begin with the specification for the system itself. Verification of image quality-oriented specification compliance can be enhanced by including a specification requirement that a vendor provide appropriate imagery at the various phases of the system life cycle. The nature and content of the imagery appropriate for each stage of the process depends on the nature of the test, the economics of collection, and the availability of techniques to extract the desired information from the data. At the earliest lifecycle stages, Concept and Technology Development (CTD) and System Development and Demonstration (SDD), the test set could include simulated imagery to demonstrate the mathematical and engineering concepts being implemented thus allowing demonstration of compliance, in part, through simulation. For Initial Operational Test and Evaluation (IOT&E), imagery collected from precisely instrumented test ranges and targets of opportunity consisting of a priori or a posteriori ground-truthed cultural and natural features are of value to the analysis of product quality compliance. Regular monitoring of image quality is possible using operational imagery and automated metrics; more precise measurements can be performed with imagery of instrumented scenes, when available. A survey of image quality measurement techniques is presented along with a discussion of the challenges of managing an airborne SAR program with the scarce resources of time, money, and ground-truthed data. Recommendations are provided that should allow an improvement in the product quality specification and maintenance process with a minimal increase in resource demands on the customer, the vendor, the operational personnel, and the asset itself.
NASA Technical Reports Server (NTRS)
Marthaler, J. G.; Heighway, J. E.
1979-01-01
An iceberg detection and identification system consisting of a moderate resolution Side Looking Airborne Radar (SLAR) interfaced with a Radar Image Processor (RIP) based on a ROLM 1664 computer with a 32K core memory, expandable to 64K, is described. The system can be operated in high- or low-resolution sampling modes. Specifically designed algorithms are applied to digitized signal returns to provide automatic target detection and location, geometrically correct video image display, and data recording. The real aperture Motorola AN/APS-94D SLAR operates in the X-band and is tunable between 9.10 and 9.40 GHz; its output power is 45 kW peak with a pulse repetition rate of 750 pulses per second. Schematic diagrams of the system are provided, together with preliminary test data.
Wave Phase-Sensitive Transformation of 3d-Straining of Mechanical Fields
NASA Astrophysics Data System (ADS)
Smirnov, I. N.; Speranskiy, A. A.
2015-11-01
This work addresses the study of oscillatory processes in elastic mechanical systems. The technical result of the innovation is the creation of a spectral set of multidimensional images that reflect time-correlated three-dimensional vector parameters of metrological and/or estimated and/or design parameters of oscillations in mechanical systems. Reconstructed images of different dimensionality, integrated in various combinations depending on their objective function, can be used as a homeostatic profile or cybernetic image of oscillatory processes in mechanical systems for an objective estimation of current operational conditions in real time. The innovation can be widely used to enhance the efficiency of monitoring and research of oscillation processes in mechanical systems (objects) in construction, mechanical engineering, acoustics, etc. The concept of vector vibrometry, based on the application of vector 3D phase-sensitive vibro-transducers, permits unique evaluation of the real stressed-strained states of power aggregates and loaded constructions and opens fundamental innovation opportunities: continuous (on-line) reliable monitoring of turbo-aggregates of electrical machines, compressor installations, bases, supports, pipelines, and other objects subjected to the damaging effects of vibrations; control of the operational safety of technical systems at all stages of the life cycle, including design, test production, tuning, testing, operational use, repairs, and service-life extension; and creation of vibro-diagnostic systems for authentic non-destructive control of the anisotropic resistance characteristics of the materials of power aggregates and loaded constructions under external effects and operational flaws. The described technology is revolutionary, universal, and common to all branches of the engineering industry and construction objects.
Design and implementation of the flight dynamics system for COMS satellite mission operations
NASA Astrophysics Data System (ADS)
Lee, Byoung-Sun; Hwang, Yoola; Kim, Hae-Yeon; Kim, Jaehoon
2011-04-01
The first Korean multi-mission geostationary Earth orbit satellite, the Communications, Ocean, and Meteorological Satellite (COMS), was launched by an Ariane 5 launch vehicle on June 26, 2010. The COMS satellite has three payloads: a Ka-band communications payload, the Geostationary Ocean Color Imager, and the Meteorological Imager. Although the COMS spacecraft bus is based on the Astrium Eurostar 3000 series, it has only one solar array, on the south panel, because all of the imaging sensors are located on the north panel. In order to maintain the spacecraft attitude with 5 wheels and 7 thrusters, COMS performs twice-daily wheel off-loading thruster firing operations, which affect the satellite orbit. The COMS flight dynamics system provides the general on-station functions such as orbit determination, orbit prediction, event prediction, station-keeping maneuver planning, station-relocation maneuver planning, and fuel accounting. All orbit-related functions in the flight dynamics system consider the orbital perturbations due to wheel off-loading operations. There are some specific flight dynamics functions for operating the spacecraft bus, such as wheel off-loading management, oscillator updating management, and on-station attitude reacquisition management. In this paper, the design and implementation of the COMS flight dynamics system is presented. An object-oriented analysis and design methodology is applied to the flight dynamics system design. The C# programming language within the Microsoft .NET framework is used for the implementation of the COMS flight dynamics system on a Windows-based personal computer.
NASA Astrophysics Data System (ADS)
Arvidson, R. E.; Squyres, S. W.; Baumgartner, E. T.; Schenker, P. S.; Niebur, C. S.; Larsen, K. W.; Seelos IV, F. P.; Snider, N. O.; Jolliff, B. L.
2002-08-01
The Field Integration Design and Operations (FIDO) prototype Mars rover was deployed and operated remotely for 2 weeks in May 2000 in the Black Rock Summit area of Nevada. The blind science operation trials were designed to evaluate the extent to which FIDO-class rovers can be used to conduct traverse science and collect samples. FIDO-based instruments included stereo cameras for navigation and imaging, an infrared point spectrometer, a color microscopic imager for characterization of rocks and soils, and a rock drill for core acquisition. Body-mounted "belly" cameras aided drill deployment, and front and rear hazard cameras enabled terrain hazard avoidance. Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, a high spatial resolution IKONOS orbital image, and a suite of descent images were used to provide regional- and local-scale terrain and rock type information, from which hypotheses were developed for testing during operations. The rover visited three sites, traversed 30 m, and acquired 1.3 gigabytes of data. The relatively small traverse distance resulted from a geologically rich site in which materials identified on a regional scale from remote-sensing data could be identified on a local scale using rover-based data. Results demonstrate the synergy of mapping terrain from orbit and during descent using imaging and spectroscopy, followed by a rover mission to test inferences and to make discoveries that can be accomplished only with surface mobility systems.
Wei, Chen-Wei; Nguyen, Thu-Mai; Xia, Jinjun; Arnal, Bastien; Wong, Emily Y.; Pelivanov, Ivan M.; O’Donnell, Matthew
2015-01-01
Because of depth-dependent light attenuation, bulky, low-repetition-rate lasers are usually used in most photoacoustic (PA) systems to provide sufficient pulse energies to image at depth within the body. However, integrating these lasers with real-time clinical ultrasound (US) scanners has been problematic because of their size and cost. In this paper, an integrated PA/US (PAUS) imaging system is presented operating at frame rates >30 Hz. By employing a portable, low-cost, low-pulse-energy (~2 mJ/pulse), high-repetition-rate (~1 kHz), 1053-nm laser, and a rotating galvo-mirror system enabling rapid laser beam scanning over the imaging area, the approach is demonstrated for potential applications requiring a few centimeters of penetration. In particular, we demonstrate here real-time (30 Hz frame rate) imaging (by combining multiple single-shot sub-images covering the scan region) of an 18-gauge needle inserted into a piece of chicken breast with subsequent delivery of an absorptive agent at more than 1-cm depth to mimic PAUS guidance of an interventional procedure. A signal-to-noise ratio of more than 35 dB is obtained for the needle in an imaging area 2.8 × 2.8 cm (depth × lateral). Higher frame rate operation is envisioned with an optimized scanning scheme. PMID:25643081
Krolopp, Ádám; Csákányi, Attila; Haluszka, Dóra; Csáti, Dániel; Vass, Lajos; Kolonics, Attila; Wikonkál, Norbert; Szipőcs, Róbert
2016-01-01
A novel, Yb-fiber laser based, handheld 2PEF/SHG microscope imaging system is introduced. It is suitable for in vivo imaging of murine skin at an average power level as low as 5 mW at 200 kHz sampling rate. Amplified and compressed laser pulses having a spectral bandwidth of 8 to 12 nm at around 1030 nm excite the biological samples at a ~1.89 MHz repetition rate, which explains how the high quality two-photon excitation fluorescence (2PEF) and second harmonic generation (SHG) images are obtained at the average power level of a laser pointer. The scanning, imaging and detection head, which comprises a conventional microscope objective for beam focusing, has a physical length of ~180 mm owing to the custom designed imaging telescope system between the laser scanner mirrors and the entrance aperture of the microscope objective. Operation of the all-fiber, all-normal dispersion Yb-fiber ring laser oscillator is electronically controlled by a two-channel polarization controller for Q-switching free mode-locked operation. The whole nonlinear microscope imaging system has the main advantages of the low price of the fs laser applied, fiber optics flexibility, a relatively small, light-weight scanning and detection head, and a very low risk of thermal or photochemical damage of the skin samples. PMID:27699118
Full resolution hologram-like autostereoscopic display
NASA Technical Reports Server (NTRS)
Eichenlaub, Jesse B.; Hutchins, Jamie
1995-01-01
Under this program, Dimension Technologies Inc. (DTI) developed a prototype display that uses a proprietary illumination technique to create autostereoscopic hologram-like full resolution images on an LCD operating at 180 fps. The resulting 3D image possesses a resolution equal to that of the LCD along with properties normally associated with holograms, including change of perspective with observer position and lack of viewing position restrictions. Furthermore, this autostereoscopic technique eliminates the need to wear special glasses to achieve the parallax effect. Under the program a prototype display was developed which demonstrates the hologram-like full resolution concept. To implement such a system, DTI explored various concept designs and enabling technologies required to support those designs. Specifically required were: a parallax illumination system with sufficient brightness and control; an LCD with rapid address and pixel response; and an interface to an image generation system for creation of computer graphics. Of the possible parallax illumination system designs, we chose a design which utilizes an array of fluorescent lamps. This system creates six sets of illumination areas to be imaged behind an LCD. This controlled illumination array is interfaced to a lenticular lens assembly which images the light segments into thin vertical light lines to achieve the parallax effect. This light line formation is the foundation of DTI's autostereoscopic technique. The David Sarnoff Research Center (Sarnoff) was subcontracted to develop an LCD that would operate with a fast scan rate and pixel response. Sarnoff chose a surface mode cell technique and produced the world's first large area pi-cell active matrix TFT LCD. The device provided adequate performance to evaluate five different perspective stereo viewing zones. A Silicon Graphics' Iris Indigo system was used for image generation which allowed for static and dynamic multiple perspective image rendering. During the development of the prototype display, we identified many critical issues associated with implementing such a technology. Testing and evaluation enabled us to prove that this illumination technique provides autostereoscopic 3D multi perspective images with a wide range of view, smooth transition, and flickerless operation given suitable enabling technologies.
Software architecture for intelligent image processing using Prolog
NASA Astrophysics Data System (ADS)
Jones, Andrew C.; Batchelor, Bruce G.
1994-10-01
We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.
Real-time implementation of a dual-mode ultrasound array system: in vivo results.
Casper, Andrew J; Liu, Dalong; Ballard, John R; Ebbini, Emad S
2013-10-01
A real-time dual-mode ultrasound array (DMUA) system for imaging and therapy is described. The system utilizes a concave (40-mm radius of curvature) 3.5 MHz, 32-element array, and a modular multichannel transmitter/receiver. The system is capable of operating in a variety of imaging and therapy modes (on transmit) and continuous receive on all array elements even during high-power operation. A signal chain consisting of field-programmable gate arrays and graphics processing units is used to enable real-time, software-defined beamforming and image formation. Imaging data, from quality assurance phantoms as well as in vivo small- and large-animal models, are presented and discussed. Corresponding images obtained using a temporally-synchronized and spatially-aligned diagnostic probe confirm the DMUA's ability to form anatomically-correct images with sufficient contrast in an extended field of view around its geometric center. In addition, high-frame-rate DMUA data also demonstrate the feasibility of detection and localization of echo changes indicative of cavitation and/or tissue boiling during high-intensity focused ultrasound exposures with 45-50 dB dynamic range. The results also show that the axial and lateral resolution of the DMUA are consistent with its f-number and bandwidth, with well-behaved speckle cell characteristics. These results point the way to a theranostic DMUA system capable of quantitative imaging of tissue property changes with high specificity to lesion formation using focused ultrasound.
Wireless Command-and-Control of UAV-Based Imaging LANs
NASA Technical Reports Server (NTRS)
Herwitz, Stanley; Dunagan, S. E.; Sullivan, D. V.; Slye, R. E.; Leung, J. G.; Johnson, L. F.
2006-01-01
Dual airborne imaging system networks were operated using a wireless line-of-sight telemetry system developed as part of a 2002 unmanned aerial vehicle (UAV) imaging mission over the USA's largest coffee plantation on the Hawaiian island of Kauai. A primary mission objective was the evaluation of commercial-off-the-shelf (COTS) 802.11b wireless technology for reduction of payload telemetry costs associated with UAV remote sensing missions. Predeployment tests with a conventional aircraft demonstrated successful wireless broadband connectivity between a rapidly moving airborne imaging local area network (LAN) and a fixed ground station LAN. Subsequently, two separate LANs with imaging payloads, packaged in exterior-mounted pressure pods attached to the underwing of NASA's Pathfinder-Plus UAV, were operated wirelessly by ground-based LANs over independent Ethernet bridges. Digital images were downlinked from the solar-powered aircraft at data rates of 2-6 megabits per second (Mbps) over a range of 6.5-9.5 km. An integrated wide area network enabled payload monitoring and control through the Internet from a range of ca. 4000 km during parts of the mission. The recent advent of 802.11g technology is expected to boost the system data rate by about a factor of five.
Analysis research for earth resource information systems - Where do we stand
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1974-01-01
Discussion of the state of the technology of earth resources information systems relative to future operational implementation. The importance of recognizing the difference between systems with image orientation and systems with numerical orientation is illustrated in an example concerning the effect of noise on multiband multispectral data obtained in an agricultural experiment. It is suggested that the data system hardware portion of the total earth resources information system be designed in terms of a numerical orientation; it is argued, however, that this choice is entirely compatible with image-oriented analysis tasks. Some aspects of interfacing such an advanced technology with an operational user community in such a way as to accommodate the user's need for flexibility and yet provide the services needed on a cost-effective basis are discussed.
Image Processing Using a Parallel Architecture.
1987-12-01
ENG/87D-25. This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images. The set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. As a step toward developing such systems, a structured set of image processing tools was implemented using a parallel computer.
Improved head-controlled TV system produces high-quality remote image
NASA Technical Reports Server (NTRS)
Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.
1967-01-01
Manipulator operator uses an improved-resolution TV camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.
DOT National Transportation Integrated Search
2009-06-01
This product updates the prior user's manual for Pave-IR to reflect changes in hardware and software made to accommodate collection of GPS data simultaneously during the collection of thermal profiles. The current Pave-IR system described in th...
NASA Astrophysics Data System (ADS)
Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.
2012-05-01
The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied on the 3-D images to extract useful information. The data are processed by extracting areas of interest. The detected areas are subjected to image analysis for more particular investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.
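For background on how an FMCW sensor of this kind produces the range (depth) dimension of the 3-D images processed here, the standard sawtooth-chirp relations are reproduced below; the symbols (chirp bandwidth B, sweep duration T, beat frequency f_b) are generic textbook quantities, not parameters taken from the paper.

$$ f_b = \frac{2BR}{cT} \;\;\Longrightarrow\;\; R = \frac{c\,T\,f_b}{2B}, \qquad \Delta R = \frac{c}{2B}, $$

so the achievable depth resolution ΔR is set by the swept bandwidth rather than by the center frequency.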
NASA Astrophysics Data System (ADS)
Uchida, T.; Tanaka, H. K. M.; Tanaka, M.
2010-02-01
Cosmic-ray muon radiography is a method used to study the internal structure of volcanoes. We have developed a muon radiographic imaging board with a power consumption low enough to be powered by a small solar power system. The imaging board generates an angular distribution of the muons. Used for real-time reading, the method may facilitate the prediction of eruptions. For real-time observations, Ethernet is employed, and the board works as a web server for remote operation. The angular distribution can be obtained from a remote PC via a network using a standard web browser. We have collected and analyzed data obtained from a 3-day field study of cosmic-ray muons at Satsuma-Iwojima volcano. The data provided a clear image of the mountain ridge as a cosmic-ray muon shadow. The measured performance of the system is sufficient for a stand-alone cosmic-ray muon radiography experiment.
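The board's central data product is an angular distribution of detected muons. A minimal sketch of how such a distribution can be formed from paired hits on two parallel detection planes is given below; the plane separation, bin counts, and synthetic data are illustrative assumptions, not parameters of the actual imaging board.

```python
import numpy as np

def angular_distribution(hits_front, hits_rear, plane_separation_m, n_bins=32):
    """Histogram muon arrival directions from paired hits on two parallel planes.

    hits_front, hits_rear : (N, 2) arrays of (x, y) hit positions in metres.
    plane_separation_m    : distance between the two detection planes.
    Returns a 2-D histogram over (tan(theta_x), tan(theta_y))."""
    dx = hits_rear[:, 0] - hits_front[:, 0]
    dy = hits_rear[:, 1] - hits_front[:, 1]
    tan_x = dx / plane_separation_m          # projected angle in the x-z plane
    tan_y = dy / plane_separation_m          # projected angle in the y-z plane
    hist, xedges, yedges = np.histogram2d(tan_x, tan_y, bins=n_bins,
                                          range=[[-1, 1], [-1, 1]])
    return hist, xedges, yedges

# Example with synthetic hits: 10000 tracks through planes 1 m apart.
rng = np.random.default_rng(0)
front = rng.uniform(-0.5, 0.5, size=(10000, 2))
rear = front + rng.normal(0.0, 0.2, size=(10000, 2))
counts, _, _ = angular_distribution(front, rear, plane_separation_m=1.0)
```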
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
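A common way to rank relative defocus between two captures of the same scene is to compare a local sharpness measure, such as the local variance of the Laplacian, between the two focus settings. The sketch below illustrates that generic idea only and is not the ECVFL-based processing used in the paper; function names and window sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def local_sharpness(image, window=15):
    """Per-pixel sharpness: local variance of the Laplacian response."""
    lap = laplace(image.astype(float))
    mean = uniform_filter(lap, window)
    mean_sq = uniform_filter(lap * lap, window)
    return mean_sq - mean * mean

def relative_defocus(img_focus_a, img_focus_b, eps=1e-9):
    """Ratio of sharpness maps for two focus settings; values above 1 mean the
    pixel is sharper (less defocused) in image A than in image B."""
    return (local_sharpness(img_focus_a) + eps) / (local_sharpness(img_focus_b) + eps)
```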
Improving Performance During Image-Guided Procedures
Duncan, James R.; Tabriz, David
2015-01-01
Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628
NASA Astrophysics Data System (ADS)
Janet, J.; Natesan, T. R.; Santhosh, Ramamurthy; Ibramsha, Mohideen
2005-02-01
An intelligent decision support tool to the Radiologist in telemedicine is described. Medical prescriptions are given based on the images of cyst that has been transmitted over computer networks to the remote medical center. The digital image, acquired by sonography, is converted into an intensity image. This image is then subjected to image preprocessing which involves correction methods to eliminate specific artifacts. The image is resized into a 256 x 256 matrix by using bilinear interpolation method. The background area is detected using distinct block operation. The area of the cyst is calculated by removing the background area from the original image. Boundary enhancement and morphological operations are done to remove unrelated pixels. This gives us the cyst volume. This segmented image of the cyst is sent to the remote medical center for analysis by Knowledge based artificial Intelligent Decision Support System (KIDSS). The type of cyst is detected and reported to the control mechanism of KIDSS. Then the inference engine compares this with the knowledge base and gives appropriate medical prescriptions or treatment recommendations by applying reasoning mechanisms at the remote medical center.
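The processing chain described above (bilinear resize to 256 x 256, background estimation and removal, thresholding, morphological cleanup, area measurement) can be sketched roughly as follows. The threshold rule, smoothing window, and structuring-element sizes are illustrative assumptions, not the KIDSS parameters.

```python
import numpy as np
from scipy import ndimage

def segment_cyst(intensity_image):
    """Rough sketch of the described pipeline on a 2-D ultrasound intensity image."""
    # Bilinear resize to 256 x 256 (order=1 is bilinear interpolation).
    zoom = (256 / intensity_image.shape[0], 256 / intensity_image.shape[1])
    img = ndimage.zoom(intensity_image.astype(float), zoom, order=1)

    # Estimate the background as a heavily smoothed version of the image and remove it.
    background = ndimage.uniform_filter(img, size=64)
    foreground = img - background

    # Threshold (assumed: one standard deviation above the mean residual).
    mask = foreground > foreground.mean() + foreground.std()

    # Morphological opening/closing removes isolated, unrelated pixels.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))

    area_pixels = int(mask.sum())            # cyst area in pixels of the 256 x 256 grid
    return mask, area_pixels
```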
The JPL/KSC telerobotic inspection demonstration
NASA Technical Reports Server (NTRS)
Mittman, David; Bon, Bruce; Collins, Carol; Fleischer, Gerry; Litwin, Todd; Morrison, Jack; Omeara, Jacquie; Peters, Stephen; Brogdon, John; Humeniuk, Bob
1990-01-01
An ASEA IRB90 robotic manipulator with attached inspection cameras was moved through a Space Shuttle Payload Assist Module (PAM) Cradle under computer control. The Operator and Operator Control Station, including graphics simulation, gross-motion spatial planning, and machine vision processing, were located at JPL. The Safety and Support personnel, PAM Cradle, IRB90, and image acquisition system were stationed at the Kennedy Space Center (KSC). Images captured at KSC were used both for processing by a machine vision system at JPL, and for inspection by the JPL Operator. The system found collision-free paths through the PAM Cradle, demonstrated accurate knowledge of the location of both objects of interest and obstacles, and operated with a communication delay of two seconds. Safe operation of the IRB90 near Shuttle flight hardware was obtained both through the use of a gross-motion spatial planner developed at JPL using artificial intelligence techniques, and through infrared beams and pressure-sensitive strips mounted to the critical surfaces of the flight hardware at KSC. The Demonstration showed that telerobotics is effective for real tasks, safe for personnel and hardware, and highly productive and reliable for Shuttle payload operations and Space Station external operations.
Disaster-hardened imaging POD for PACS
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice; Frost, Meryll
2005-04-01
After the events of 9/11, many people questioned their ability to keep critical services operational in the face of massive infrastructure failure. Hospitals increased their backup and recovery power, made plans for emergency water and food, and operated on a heightened alert awareness with more frequent disaster drills. In a film-based radiology department, if a portable X-ray unit, a CT unit, an Ultrasound unit, and a film processor could be operated on emergency power, a limited but effective number of studies could be performed. However, in a digital department, there is a reliance on the network infrastructure to deliver images to viewing locations. The system developed for our institution uses several imaging PODs, a name we chose because it implied to us a safe, contained environment. Each POD is a stand-alone, emergency-powered network capable of generating images and displaying them in the POD or printing them to a DICOM printer. The technology we used to create a POD consists of a computer with dual network interface cards joining our private, local POD network to the hospital network. In the case of an infrastructure failure, each POD can and does work independently to produce CTs, CRs, and Ultrasounds. The system has been tested during disaster drills and works correctly, producing images using equipment that technologists are comfortable with and very few emergency switch-over tasks. Purpose: To provide imaging capabilities in the event of a natural or man-made disaster with infrastructure failure. Method: Each POD is on both the standard and the emergency power systems. All the vendor equipment that produces images is on a private, stand-alone network controlled either by a simple or a managed switch. Included in each POD is a dry-process DICOM printer that is rarely used during normal operations and a display workstation. One node on the private network is a PACS application processor (AP) with two network interface cards, one for the private network, one for the standard PACS network. During ordinary daily operations, all acquired images pass through this AP and are routed to the PACS archives, web servers, and workstations. However, if the power and network to much of the hospital were to fail, the stand-alone POD could still function. Images are routed to the AP, but cannot be forwarded to the main network. However, they can be routed to the printer and display in the POD. They are also stored on the AP to continue normal routing when the infrastructure is restored. Results: The imaging PODs have been tested in actual disaster testing where the infrastructure was intentionally removed, and worked as designed.
To date, we have not had to use them in a real-life scenario and we hope we never do, but we feel we have a reasonable level of emergency imaging capability if we ever need it. Conclusions: Our testing indicates our PODS are a viable way to continue medical imaging in the face of an emergency with a major part of our network and electrical infrastructure destroyed.
Gröbe, Alexander; Weber, Christoph; Schmelzle, Rainer; Heiland, Max; Klatt, Jan; Pohlenz, Philipp
2009-09-01
Gunshot wounds are a rare occurrence during times of peace. The removal of projectiles is recommended; in some cases, however, this is controversial. The reproduction of a projectile image can be difficult if it is not adjacent to an anatomical landmark. Therefore, navigation systems give the surgeon continuous real-time orientation intraoperatively. The aim of this study was to report our experience with image-guided removal of projectiles and the resulting intra- and postoperative complications. We investigated 50 patients retrospectively; 32 had image-guided surgical removal of projectiles in the oral and maxillofacial region. Eighteen had surgical removal of projectiles without navigation assistance. There was a significant correlation (p = 0.0136) between navigated vs. non-navigated surgery and complication rate, including major bleeding (n = 4 vs. n = 1, 8% vs. 2%), soft tissue infections (n = 7 vs. n = 2, 14% vs. 4%), and nerve damage (n = 2 vs. n = 0, 4% vs. 0%; p = 0.038), and between operating time and postoperative complications. A strong trend between operating time and navigated surgery (p = 0.1103) was also observed. When using the navigation system, we could reduce the operating time. In conclusion, there is a significant correlation between reduced intra- and postoperative complications, including wound infections, nerve damage, and major bleeding, and the appropriate use of a navigation system. In all of these cases, we could demonstrate reduced operating time. Cone-beam computed tomography plays an important role in detecting projectiles or metallic foreign bodies intraoperatively.
Towards simultaneous single emission microscopy and magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Cai, Liang
In recent years, combined nuclear imaging and magnetic resonance imaging (MRI) has drawn extensive research effort. Together they can provide simultaneously acquired anatomical and functional information inside the human/small animal body in vivo. In this dissertation, the development of an ultrahigh resolution MR-compatible SPECT (Single Photon Emission Computed Tomography) system that can be operated inside a pre-existing clinical MR scanner for simultaneous dual-modality imaging of small animals will be discussed. This system is constructed with 40 small-pixel CdTe detector modules assembled in a fully stationary ring SPECT geometry. A series of experiments has demonstrated that this system is capable of providing an imaging resolution of <500 μm when operated inside MR scanners. The ultrahigh resolution MR-compatible SPECT system is built around a small-pixel CdTe detector module that we recently developed. Each module consists of CdTe detectors having an overall size of 2.2 cm x 1.1 cm, divided into 64 x 32 pixels of 350 μm in size. A novel hybrid pixel-waveform (HPWF) readout system is also designed to alleviate several challenges for using small-pixel CdTe detectors in ultrahigh-resolution SPECT imaging applications. The HPWF system utilizes a modified version of a 2048-channel 2-D CMOS ASIC to read out the anode pixels, and digitizing circuitry to sample the signal waveform induced on the cathode. The cathode waveform acquired with the HPWF circuitry offers excellent spatial resolution, energy resolution and depth of interaction (DOI) information, even in the presence of excessive charge-sharing/charge-loss between the small anode pixels. The HPWF CdTe detector is designed and constructed with a minimum amount of ferromagnetic materials, to ensure MR compatibility. To achieve sub-500 μm imaging resolution, two specially designed SPECT apertures have been constructed with different pinhole sizes of 300 μm and 500 μm, respectively. Each aperture has 40 pinhole inserts made of cast platinum (90%)-iridium (10%) alloy, which provides the maximum stopping power and is compatible with MR scanners. The SPECT system is installed on a non-metal gantry constructed with 3-D printing using nylon powder material. This compact system can work as a "low-cost" desktop ultrahigh resolution SPECT system. It can also be directly operated inside an MR scanner. Accurate system geometrical calibration and corresponding image reconstruction methods for the MRC-SPECT system are developed. In order to account for the magnetic field induced distortion in the SPECT image, a comprehensive charge collection model inside a strong magnetic field is adopted to produce high-resolution SPECT images inside the MR scanner.
Acousto-optic laser projection systems for displaying TV information
NASA Astrophysics Data System (ADS)
Gulyaev, Yu V.; Kazaryan, M. A.; Mokrushin, Yu M.; Shakin, O. V.
2015-04-01
This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demos, Stavros; Levenson, Richard
The present disclosure relates to a method for analyzing tissue specimens. In one implementation the method involves obtaining a tissue sample and exposing the sample to one or more fluorophores as contrast agents to enhance contrast of subcellular compartments of the tissue sample. The tissue sample is illuminated by an ultraviolet (UV) light having a wavelength between about 200 nm to about 400 nm, with the wavelength being selected to result in penetration to only a specified depth below a surface of the tissue sample. Inter-image operations between images acquired under different imaging parameters allow for improvement of the image quality via removal of unwanted image components. A microscope may be used to image the tissue sample and provide the image to an image acquisition system that makes use of a camera. The image acquisition system may create a corresponding image that is transmitted to a display system for processing and display.
PScan 1.0: flexible software framework for polygon based multiphoton microscopy
NASA Astrophysics Data System (ADS)
Li, Yongxiao; Lee, Woei Ming
2016-12-01
Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy has a significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, there have been several flexible software platforms catering to custom-built microscopy systems (ScanImage, HelioScan, MicroManager) that perform at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high-speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB image processing toolbox. It has the capability to communicate directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high-speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and contains sufficient flexibility for users to adapt to their high-speed imaging systems.
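At the frame rates quoted (5 to 1600 fps), acquisition software of this kind typically has to assemble digitized line scans from the polygon mirror into frames without stalling the digitizer. The sketch below shows only that generic producer/consumer pattern; all names, sizes, and the use of Python threads are illustrative assumptions and not the PScan 1.0 or Matrox interfaces.

```python
import queue
import threading
import numpy as np

LINES_PER_FRAME = 512      # assumed frame height
PIXELS_PER_LINE = 512      # assumed samples per polygon facet sweep

def frame_assembler(line_queue, frame_queue):
    """Consume line buffers from the digitizer and emit completed frames."""
    frame = np.zeros((LINES_PER_FRAME, PIXELS_PER_LINE), dtype=np.uint16)
    row = 0
    while True:
        line = line_queue.get()
        if line is None:                     # sentinel: acquisition stopped
            break
        frame[row] = line
        row += 1
        if row == LINES_PER_FRAME:
            frame_queue.put(frame.copy())    # hand off; keep filling a fresh buffer
            row = 0

line_q, frame_q = queue.Queue(maxsize=4096), queue.Queue(maxsize=8)
threading.Thread(target=frame_assembler, args=(line_q, frame_q), daemon=True).start()
```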
Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas
2009-01-01
Background Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we will propose some examples of dynamical flows used in image contrasting and contouring. Methodology First we present the physiological basis of the retina function by showing the role of lateral inhibition in the generation of optical illusions and pathologic processes. Then, based on these biological considerations about the real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach, or its continuous version, i.e. a non-isotropic diffusion reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows; we compare it with the watershed method and we use it for contouring. Conclusions We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery. PMID:19547712
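A minimal discrete analogue of the lateral-inhibition contrast enhancement discussed here is an on-center/off-surround (difference-of-Gaussians) filter. The kernel widths and gain below are illustrative choices; the sketch is not the authors' neural-network or reaction-diffusion formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lateral_inhibition(image, sigma_center=1.0, sigma_surround=4.0, gain=1.0):
    """Contrast enhancement by subtracting an inhibitory surround response from an
    excitatory center response (difference of Gaussians)."""
    img = image.astype(float)
    center = gaussian_filter(img, sigma_center)
    surround = gaussian_filter(img, sigma_surround)
    enhanced = img + gain * (center - surround)
    return np.clip(enhanced, 0.0, enhanced.max())
```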
Technology study of quantum remote sensing imaging
NASA Astrophysics Data System (ADS)
Bi, Siwen; Lin, Xuling; Yang, Song; Wu, Zhiqiang
2016-02-01
Quantum remote sensing is proposed in response to the development of remote sensing science and technology and its application requirements. First, the background of quantum remote sensing is briefly reviewed, covering quantum remote sensing theory, the information mechanism, imaging experiments, the principle-prototype research situation, and related research at home and abroad. We then expound the compression operator of the quantum remote sensing radiation field and the basic principles of the single-mode compression operator, the preparation of the compressed quantum light field and the optical imaging experiments for remote sensing image compression, and the quantum remote sensing imaging principle prototype. Quantum remote sensing spaceborne active imaging technology is then put forward, mainly including the composition and working principle of the spaceborne active imaging system, the device for preparing and injecting compressed light for active imaging, and the quantum noise amplification device. Finally, a summary of quantum remote sensing research over the past 15 years and future development directions are given.
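In standard quantum-optics notation, the single-mode "compression" (squeezing) operator referred to here is usually written as follows; this is textbook background for the reader rather than the authors' own formulation, and the quadrature convention (vacuum variance 1/4) is stated explicitly because conventions differ.

$$ \hat S(\xi) = \exp\!\Big[\tfrac{1}{2}\big(\xi^{*}\hat a^{2} - \xi\,\hat a^{\dagger 2}\big)\Big], \qquad \xi = r e^{i\theta}, $$

and for the quadratures $\hat X_1 = (\hat a + \hat a^{\dagger})/2$, $\hat X_2 = (\hat a - \hat a^{\dagger})/2i$ (taking $\theta = 0$), the vacuum variances of $1/4$ become $e^{-2r}/4$ and $e^{+2r}/4$, i.e. noise is "compressed" in one quadrature at the expense of the other.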
Integrated imaging sensor systems with CMOS active pixel sensor technology
NASA Technical Reports Server (NTRS)
Yang, G.; Cunningham, T.; Ortiz, M.; Heynssens, J.; Sun, C.; Hancock, B.; Seshadri, S.; Wrigley, C.; McCarty, K.; Pain, B.
2002-01-01
This paper discusses common approaches to CMOS APS technology, as well as specific results on the five-wire programmable digital camera-on-a-chip developed at JPL. The paper also reports recent research in the design, operation, and performance of APS imagers for several imager applications.
A Novel 24 GHz One-Shot, Rapid and Portable Microwave Imaging System
NASA Technical Reports Server (NTRS)
Ghasr, M. T.; Abou-Khousa, M. A.; Kharkovsky, S.; Zoughi, R.; Pommerenke, D.
2008-01-01
Development of microwave and millimeter wave imaging systems has received significant attention in the past decade. Signals at these frequencies penetrate inside dielectric materials and have relatively small wavelengths. Thus, imaging systems at these frequencies can produce images of the dielectric and geometrical distributions of objects. Although there are many different approaches for imaging at these frequencies, they each have their respective advantageous and limiting features (hardware, reconstruction algorithms). One method involves electronically scanning a given spatial domain while recording the coherent scattered field distribution from an object. Consequently, different reconstruction or imaging techniques may be used to produce an image (dielectric distribution and geometrical features) of the object. The ability to perform this accurately and fast can lead to the development of a rapid imaging system that can be used in the same manner as a video camera. This paper describes the design of such a system, operating at 24 GHz, using the modulated scatterer technique applied to 30 resonant slots in a prescribed measurement domain.
An arc control and protection system for the JET lower hybrid antenna based on an imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueiredo, J., E-mail: joao.figueiredo@jet.efda.org; Mailloux, J.; Kirov, K.
Arcs are the potentially most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled they can produce damage and cause plasma disruption by impurity influx. To address this issue an arc real time control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma. This is done through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real time control system. It has adapted the ITER-like wall hotspot protection system using an identical CCD camera and real time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided in 24 Regions Of Interest (ROIs), each one corresponding to one klystron. If an arc precursor is detected in a ROI, power is reduced locally, with subsequent potential damage and plasma disruption avoided. The power is subsequently reinstated if, during a defined interval of time, arcing is confirmed not to be present by image analysis. This system was successfully commissioned during the restart phase and beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging systems works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.
Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging
NASA Astrophysics Data System (ADS)
Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel
2010-02-01
We developed a multimodal adaptive optics (AO) retinal imager for diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The development represents the first ever high performance AO system constructed that combines AO-corrected scanning laser ophthalmoscopy (SLO) and swept source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide field line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presentation to the subject of stimuli and other visual cues. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. The system was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve targets deep into the choroid. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high performance adaptive optics imaging to help guide therapies, develop new drugs, and improve patient outcomes.
Using a high-definition stereoscopic video system to teach microscopic surgery
NASA Astrophysics Data System (ADS)
Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin
2007-02-01
Introduction: While there is an increasing demand for minimally invasive operative techniques in Ear, Nose and Throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring in microscopic surgery in order to facilitate teaching interaction between senior and junior surgeon. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVision Systems Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation consisting of a dual Intel® Xeon® CPU (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen through polarized filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup being performed only once on the day before surgery, fine adjustments required about 10 minutes extra during the operation schedule, which fitted into the time interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments which were needed. D) Medical students instantly shared the information given by all staff and the image, thus avoiding the need for an extra teaching session. Conclusion: High-definition stereoscopy bears the potential to compress the learning curve for undergraduate as well as postgraduate medical professionals in minimally invasive surgery. Further studies will focus on the long-term effect on operative training as well as on post-processing of HD stereoscopy video content for off-line interactive medical education.
Concept of electro-optical sensor module for sniper detection system
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz
2010-10-01
The paper presents an initial concept of the electro-optical sensor unit for sniper detection purposes. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is a multi-sensor sniper and shot detection system. Being a part of a larger system, it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capabilities. Generally, acoustic (or radar) systems used for shot detection offer only "after-the-shot" information and they cannot prevent enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspected objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters such as focal plane array size and type, focal length and aperture were chosen on the basis of assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image, easier to comprehend for a human operator. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot recording sequences are presented.
[Experimental study of angiography using vascular interventional robot-2(VIR-2)].
Tian, Zeng-min; Lu, Wang-sheng; Liu, Da; Wang, Da-ming; Guo, Shu-xiang; Xu, Wu-yi; Jia, Bo; Zhao, De-peng; Liu, Bo; Gao, Bao-feng
2012-06-01
To verify the feasibility and safety of a new vascular interventional robot system used in vascular interventional procedures. The vascular interventional robot type-2 (VIR-2) includes a master-slave catheter propulsion system, an image navigation system and a force feedback system; catheter movement is achieved under automatic control and navigation, with real-time force feedback integrated. An in vitro pre-test in a vascular model was followed by cerebral angiography in a dog. The surgeon controlled the vascular interventional robot remotely and the catheter was inserted into the intended target; the catheter positioning error and the operation time were evaluated. The in vitro pre-test and the animal experiment went well; the catheter could enter any vascular branch. Catheter positioning error was less than 1 mm. The angiography operation in the animal was carried out smoothly without complication; the success rate of the operation was 100%, the entire experiment took 26 and 30 minutes, efficiency was slightly improved compared with the VIR-1, and the time staff were exposed to the DSA machine was 0 minutes. The resistance measured by the force sensor can be displayed to the operator to provide a safety guarantee for the operation. There were no surgical complications. VIR-2 is safe and feasible and can achieve remote catheter operation and angiography; the master-slave system retains the characteristics of the traditional procedure. The three-dimensional image can guide the operation more smoothly, and the force feedback device provides remote real-time haptic information to provide safety for the operation.
Infrared Imaging Sharpens View in Critical Situations
NASA Technical Reports Server (NTRS)
2007-01-01
Innovative Engineering and Consulting (IEC) Infrared Systems, a leading developer of thermal imaging systems and night vision equipment, received a Glenn Alliance for Technology Exchange (GATE) award, half of which was in the form of additional NASA assistance for new product development. IEC Infrared Systems worked with electrical and optical engineers from Glenn's Diagnostics and Data Systems Branch to develop a commercial infrared imaging system that could differentiate the intensity of heat sources better than other commercial systems. The research resulted in two major thermal imaging solutions: NightStalkIR and IntrudIR Alert. These systems are being used in the United States and abroad to help locate personnel stranded in emergency situations, defend soldiers on the battlefield abroad, and protect high-value facilities and operations. The company is also applying its advanced thermal imaging techniques to medical and pharmaceutical product development with a Cleveland-based pharmaceutical company.
Algorithms for High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Morookian, John-Michael; Lambert, James
2010-01-01
Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen (see figure). The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate.
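A simplified version of the row-slice idea can be sketched as follows: within the ROI, each horizontal row is thresholded, rows containing a plausible pupil span are kept, and the span midpoints and widths are combined into a weighted centroid estimate. The threshold and size checks are illustrative assumptions, not parameters of the system described above.

```python
import numpy as np

def pupil_centroid_from_slices(roi, dark_threshold=50, min_width=5):
    """Estimate the pupil centroid in an infrared eye ROI using horizontal row slices.

    On each row the pupil appears as a dark span; the span between the first and
    last below-threshold pixels gives its horizontal extent on that slice.  Slice
    midpoints and widths are then combined into a weighted centroid estimate."""
    centers, rows, widths = [], [], []
    for y in range(roi.shape[0]):
        cols = np.flatnonzero(roi[y] < dark_threshold)
        if cols.size == 0:
            continue
        left, right = cols[0], cols[-1]
        width = right - left + 1
        if width < min_width:
            continue                         # too narrow to belong to the pupil
        centers.append(0.5 * (left + right))
        rows.append(y)
        widths.append(width)                 # wide (central) slices get more weight
    if not rows:
        return None
    w = np.asarray(widths, dtype=float)
    cx = float(np.average(centers, weights=w))
    cy = float(np.average(rows, weights=w))
    return cx, cy
```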
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
Integrating Space Systems Operations at the Marine Expeditionary Force Level
2015-06-01
The available text for this entry consists only of front-matter fragments: a list of abbreviations (EMI, ENVI, EW, FA40, FEC, SFE, SIGINT, SSA, SSE, STK) and references to the GPS Interference and Navigation Tool (GIANT) for providing GPS accuracy prediction reports and Systems Toolkit (STK) analysis.
Final Report: Non-Visible, Automated Target Acquisition and Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziock, Klaus-Peter; Fabris, Lorenzo; Goddard, James K.
The Roadside Tracker (RST) represents a new approach to radiation portal monitors. It uses a combination of gamma-ray and visible-light imaging to localize gamma-ray radiation sources to individual vehicles in free-flowing, multi-lane traffic. Deployed as two trailers that are parked on either side of the roadway (Fig. 1), the RST scans passing traffic with two large gamma-ray imagers, one mounted in each trailer. The system compensates for vehicle motion through the imager's fields of view by using automated target acquisition and tracking (TAT) software applied to a stream of video images. Once a vehicle has left the field of view, the radiation image of that vehicle is analyzed for the presence of a source, and if one is found, an alarm is sounded. The gamma-ray image is presented to the operator together with the video image of the traffic stream when the vehicle was approximately closest to the system (Fig. 2). The offending vehicle is identified with a bounding box to distinguish it from other vehicles that might be present at the same time. The system was developed under a previous grant from the Department of Homeland Security's (DHS's) Domestic Nuclear Detection Office (DNDO). This report documents work performed with follow-on funding from DNDO to further advance the development of the RST. Specifically, the primary thrust was to extend the performance envelope of the system by replacing the visible-light video cameras used by the TAT software with sensors that would allow operation at night and during inclement weather. In particular, it was desired to allow operation after dark without requiring external lighting. As part of this work, the system software was also upgraded to allow the use of 64-bit computers, the current generation operating system (OS), software development environment (Windows 7 vs. Windows XP, and current Visual Studio.Net), and improved software version controls (Git vs. SourceSafe). With the upgraded performance allowed by new computers, and the additional memory available in a 64-bit OS, the system was able to handle greater traffic densities, and this also allowed addition of the ability to handle stop-and-go traffic.
Augmented reality in neurovascular surgery: feasibility and first uses in the operating room.
Kersten-Oertel, Marta; Gerard, Ian; Drouin, Simon; Mok, Kelvin; Sirhan, Denis; Sinclair, David S; Collins, D Louis
2015-11-01
The aim of this report is to present a prototype augmented reality (AR) intra-operative brain imaging system. We present our experience of using this new neuronavigation system in neurovascular surgery and discuss the feasibility of this technology for aneurysms, arteriovenous malformations (AVMs), and arteriovenous fistulae (AVFs). We developed an augmented reality system that uses an external camera to capture the live view of the patient on the operating room table and to merge this view with pre-operative volume-rendered vessels. We have extensively tested the system in the laboratory and have used the system in four surgical cases: one aneurysm, two AVMs and one AVF case. The developed AR neuronavigation system allows for precise patient-to-image registration and calibration of the camera, resulting in a well-aligned augmented reality view. Initial results suggest that augmented reality is useful for tailoring craniotomies, localizing vessels of interest, and planning resection corridors. Augmented reality is a promising technology for neurovascular surgery. However, for more complex anomalies such as AVMs and AVFs, better visualization techniques that allow one to distinguish between arteries and veins and determine the absolute depth of a vessel of interest are needed.
Space Technology - Game Changing Development NASA Facts: Autonomous Medical Operations
NASA Technical Reports Server (NTRS)
Thompson, David E.
2018-01-01
The AMO (Autonomous Medical Operations) Project is working extensively to train medical models on the reliability and confidence of computer-aided interpretation of ultrasound images in various clinical settings, and of various anatomical structures. AI (Artificial Intelligence) algorithms recognize and classify features in the ultrasound images, and these are compared to those features that clinicians use to diagnose diseases. The acquisition of clinically validated image assessment and the use of the AI algorithms constitutes fundamental baseline for a Medical Decision Support System that will advise crew on long-duration, remote missions.
Utilization of the Space Vision System as an Augmented Reality System For Mission Operations
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles
2003-01-01
Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.
A spatial registration method for navigation system combining O-arm with spinal surgery robot
NASA Astrophysics Data System (ADS)
Bai, H.; Song, G. L.; Zhao, Y. W.; Liu, X. Z.; Jiang, Y. X.
2018-05-01
Minimally invasive spinal surgery has become increasingly popular in recent years as it reduces the chance of post-operative complications. However, the procedure of spinal surgery is complicated and the surgical view in minimally invasive surgery is limited. In order to increase the quality of percutaneous pedicle screw placement, the O-arm, a mobile intraoperative imaging system, is used to assist surgery. With the extensive use of the O-arm, robot navigation systems combined with it are also increasingly common. One of the major problems in a surgical navigation system is to associate the patient space with the intra-operative image space. This study proposes a spatial registration method for a spinal surgical robot navigation system, in which the O-arm scans a calibration phantom containing metal calibration spheres. First, metal artifacts are reduced in the CT slices, and then the circles in the images are identified based on moment invariants. From these, the positions of the calibration spheres in the image space are obtained. The registration matrix is then computed using the ICP algorithm. Finally, the position error is calculated to verify the feasibility and accuracy of the registration method.
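Once corresponding calibration-sphere centers are available in image space and patient space, the rigid transform at the heart of an ICP-style registration can be estimated in closed form with the SVD (Kabsch) method. The sketch below shows only that step, under the assumption that correspondences are already known; the function names are illustrative, and the paper's phantom geometry and correspondence search are omitted.

```python
import numpy as np

def rigid_registration(image_pts, patient_pts):
    """Least-squares rigid transform mapping image-space points onto patient-space
    points (both (N, 3) arrays with known one-to-one correspondence)."""
    mu_img = image_pts.mean(axis=0)
    mu_pat = patient_pts.mean(axis=0)
    H = (image_pts - mu_img).T @ (patient_pts - mu_pat)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_pat - R @ mu_img
    return R, t

def registration_error(image_pts, patient_pts, R, t):
    """Root-mean-square position error after applying the estimated transform."""
    mapped = image_pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((mapped - patient_pts) ** 2, axis=1))))
```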
NASA Astrophysics Data System (ADS)
Mehring, James W.; Thomas, Scott D.
1995-11-01
The Data Services Segment of the Defense Mapping Agency's Digital Production System provides a digital archive of imagery source data for use by DMA's cartographic users. This system was developed in the mid-1980s and is currently undergoing modernization. This paper addresses the modernization of the imagery buffer function, which was performed by custom hardware in the baseline system and is being replaced by a RAID Server based on commercial off-the-shelf (COTS) hardware. The paper briefly describes the baseline DMA image system and the modernization program that is currently under way. Throughput benchmark measurements were made to support design configuration decisions for a commercial off-the-shelf (COTS) RAID Server to perform as the system image buffer. The test program began with performance measurements of the RAID read and write operations between the RAID arrays and the server CPU for RAID levels 0, 5 and 0+1. Interface throughput measurements were made for the HiPPI interface between the RAID Server and the image archive and processing system, as well as for the client-side interface between a custom interface board that provides the interface between the internal bus of the RAID Server and the Input-Output Processor (IOP) external wideband network currently in place in the DMA system to service client workstations. End-to-end measurements were taken from the HiPPI interface through the RAID write and read operations to the IOP output interface.
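Throughput figures of the kind gathered in this benchmark program are commonly obtained by timing large sequential transfers against the storage path under test. The sketch below illustrates only the measurement pattern; the path, block size, and total size are placeholders rather than values from the DMA configuration, and a real benchmark would also control caching effects.

```python
import os
import time

def sequential_write_throughput(path, total_bytes=1 << 30, block_size=1 << 22):
    """Write total_bytes of data in block_size chunks and return MB/s."""
    block = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())              # include time to reach stable storage
    elapsed = time.perf_counter() - start
    return (written / 1e6) / elapsed
```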
NASA Astrophysics Data System (ADS)
Fennelly, Alphonsus J.; Fry, Edward L.; Zukic, Muamer; Wilson, Michele M.; Janik, Tadeusz J.; Torr, Douglas G.
1994-11-01
In six companion papers we discuss a capability for x-ray tomographic spectrophotometry at three energy ranges to observe foreign objects in various systems using a novel x-ray optical and photometric approach. We describe new types of thin-film x-ray reflecting filters that provide energy-specific optical trains, inserted into existing x-ray interrogation systems. This is complemented by performing topographic imaging at a few to several energies in each case, providing a full topographic and spectrophotometric analysis. Foreign objects can then be detected, localized, discriminated, and classified, so that they may be dealt with by excision and replacement with benign system elements. We analyze statistical and operational concerns leading to the design of three systems. The first operates at x-ray energies of 1-10 keV; it deals with defects in microelectronic integrated circuits. The second operates at x-ray energies of 10-30 keV; it deals with defects in human tissue. The chemical specificity and image resolution of the system will allow identification, localization, and mensuration of tumors without the need for biopsy. The system on which this discussion concentrates, the third, operates at x-ray energies of 30-70 keV; it deals with the presence in transportation systems of explosive devices, and of contraband materials and objects in luggage and cargo. We present the analysis of the statistical features of the detection problem in these types of systems, discussing the operational constraints that limit system performance. After considering the multivariate, multisignature approach to the problem, we discuss the tomographic and spectrophotometric approach, which yields a better solution to the detection problem within the operational constraints.
NASA Astrophysics Data System (ADS)
Serief, Chahira
2017-11-01
Alsat-1B, launched into a 670 km sun-synchronous orbit on board a PSLV launch vehicle from the Sriharikota launch site in India on 26 September 2016, is a medium-resolution Earth observation satellite with a mass of 100 kg. Alsat-1B will be used for agricultural and resource monitoring, disaster management, land use mapping and urban planning. It is based on the SSTL-100 platform, and flies a 24 m multispectral imager and a 12 m panchromatic imager delivering images with a swath width of 140 km. One of the main factors affecting the performance of satellite-borne optical imaging systems is micro-vibration: a low-level mechanical disturbance inevitably generated by moving parts on a satellite and exceptionally difficult to control with the spacecraft's attitude and orbital control system (AOCS). Micro-vibration commonly causes problems for optical imaging systems onboard Earth observation satellites; its major effect is excitation of the support structures for the optical elements during imaging operations, which can severely degrade image quality through smearing and distortion. Quantitative characterization of the image degradation caused by micro-vibration is therefore useful and important as part of system-level analysis, helping to prevent micro-vibration effects through proper design and to restore degraded images. The aim of this work is to provide quantitative estimates of the effect of micro-vibration on the performance of the Alsat-1B imager, as may be experienced operationally, in terms of the modulation transfer function (MTF) and based on ground micro-vibration test results.
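For context, two standard closed-form terms used in this kind of MTF analysis are the MTF of linear image motion (low-frequency vibration causing a smear of length d during the exposure) and the MTF of high-frequency sinusoidal jitter of amplitude D. The sketch below evaluates both; the numbers and the Gaussian-free form are illustrative assumptions, not values from the Alsat-1B test campaign.

```python
import numpy as np
from scipy.special import j0

f = np.linspace(0.0, 50.0, 500)    # spatial frequency, cycles/mm
d = 0.010                          # assumed linear smear during integration, mm
D = 0.005                          # assumed sinusoidal vibration amplitude, mm

mtf_linear = np.abs(np.sinc(d * f))        # low-frequency (linear) motion blur
mtf_sine = np.abs(j0(2 * np.pi * D * f))   # high-frequency sinusoidal jitter

# the total system MTF is the product of the static optics/detector MTF with these terms
for fi, ml, ms in zip(f[::100], mtf_linear[::100], mtf_sine[::100]):
    print(f"f = {fi:5.1f} cy/mm   linear MTF = {ml:.3f}   sinusoidal MTF = {ms:.3f}")
```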
Fiber optic spectroscopic digital imaging sensor and method for flame properties monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelepouga, Serguei A; Rue, David M; Saveliev, Alexei V
2011-03-15
A system for real-time monitoring of flame properties in combustors and gasifiers which includes an imaging fiber optic bundle having a light receiving end and a light output end, and a spectroscopic imaging system operably connected with the light output end of the imaging fiber optic bundle. Light received at the light receiving end of the bundle is focused by a wall disposed between the light receiving end and the light source, which wall forms a pinhole opening aligned with the light receiving end.
Diagnostic Imaging of the Hepatobiliary System: An Update.
Marolf, Angela J
2017-05-01
Recent advances in diagnostic imaging of the hepatobiliary system include MRI, computed tomography (CT), contrast-enhanced ultrasound, and ultrasound elastography. With the advent of multislice CT scanners, sedated examinations in veterinary patients are feasible, increasing the utility of this imaging modality. CT and MRI provide additional information for dogs and cats with hepatobiliary diseases because they avoid superimposition of structures and operator dependence, and through intravenous contrast administration. Advanced ultrasound methods can offer complementary information to standard ultrasound imaging. These newer imaging modalities assist clinicians by aiding diagnosis, prognostication, and surgical planning. Copyright © 2016 Elsevier Inc. All rights reserved.
A zero-error operational video data compression system
NASA Technical Reports Server (NTRS)
Kutz, R. L.
1973-01-01
A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.
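The abstract does not spell out the coding scheme, but a zero-error factor-of-two gain of this kind typically comes from exploiting pixel-to-pixel correlation with a reversible predictive code followed by variable-length coding. The sketch below is a generic illustration of that idea (delta prediction plus an entropy estimate of the residuals), not the ATS/ITOS flight algorithm.

```python
import numpy as np

def entropy_bits(values):
    """Shannon entropy (bits/sample) of an integer sequence."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# synthetic 6-bit scan line with strong pixel-to-pixel correlation
rng = np.random.default_rng(1)
line = np.clip(np.cumsum(rng.integers(-2, 3, size=4096)) + 32, 0, 63)

raw_bits = 6.0                              # bits/pixel before coding
deltas = np.diff(line, prepend=0)           # reversible predictor: previous pixel
coded_bits = entropy_bits(deltas)           # lower bound for a variable-length code

print(f"raw: {raw_bits:.2f} b/px, residual entropy: {coded_bits:.2f} b/px, "
      f"ratio ~ {raw_bits / coded_bits:.1f}x")

# decoding is exact: the cumulative sum of the residuals restores the original line
assert np.array_equal(np.cumsum(deltas), line)
```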
High-sensitivity, high-speed continuous imaging system
Watson, Scott A; Bender, III, Howard A
2014-11-18
A continuous imaging system for recording low levels of light typically extending over small distances with high-frame rates and with a large number of frames is described. Photodiode pixels disposed in an array having a chosen geometry, each pixel having a dedicated amplifier, analog-to-digital convertor, and memory, provide parallel operation of the system. When combined with a plurality of scintillators responsive to a selected source of radiation, in a scintillator array, the light from each scintillator being directed to a single corresponding photodiode in close proximity or lens-coupled thereto, embodiments of the present imaging system may provide images of x-ray, gamma ray, proton, and neutron sources with high efficiency.
Modeling the target acquisition performance of active imaging systems
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Jacobs, Eddie L.; Halford, Carl E.; Vollmerhausen, Richard; Tofsted, David H.
2007-04-01
Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.
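A common first-order way to represent fully developed speckle, treated here as an additional noise source, is multiplicative noise with unit-mean exponential intensity statistics, which frame averaging suppresses. The snippet below illustrates that behaviour; it is an assumption-level stand-in for the NVESD model, not its actual formulation.

```python
import numpy as np

rng = np.random.default_rng(42)
scene = np.tile(np.linspace(0.2, 1.0, 64), (64, 1))   # noiseless reflectance map

def speckled_frame(scene, rng):
    # fully developed speckle: multiplicative, unit-mean exponential intensity
    return scene * rng.exponential(scale=1.0, size=scene.shape)

for n_frames in (1, 4, 16, 64):
    avg = np.mean([speckled_frame(scene, rng) for _ in range(n_frames)], axis=0)
    contrast = (avg - scene).std() / scene.mean()      # residual speckle contrast
    print(f"{n_frames:3d} frames averaged -> residual noise contrast {contrast:.3f}")
```

The residual contrast falls roughly as the square root of the number of independent frames averaged, which is why frame averaging is one of the mitigations considered in such trade studies.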
Ultrasound image guidance of cardiac interventions
NASA Astrophysics Data System (ADS)
Peters, Terry M.; Pace, Danielle F.; Lang, Pencilla; Guiraudon, Gérard M.; Jones, Douglas L.; Linte, Cristian A.
2011-03-01
Surgical procedures often have the unfortunate side-effect of causing the patient significant trauma while accessing the target site. Indeed, in some cases the trauma inflicted on the patient during access to the target greatly exceeds that caused by performing the therapy. Heart disease has traditionally been treated surgically using open chest techniques with the patient being placed "on pump" - i.e. their circulation being maintained by a cardio-pulmonary bypass or "heart-lung" machine. Recently, techniques have been developed for performing minimally invasive interventions on the heart, obviating the formerly invasive procedures. These new approaches rely on pre-operative images, combined with real-time images acquired during the procedure. Our approach is to register intra-operative images to the patient, and use a navigation system that combines intra-operative ultrasound with virtual models of instrumentation that has been introduced into the chamber through the heart wall. This paper illustrates the problems associated with traditional ultrasound guidance, and reviews the state of the art in real-time 3D cardiac ultrasound technology. In addition, it discusses the implementation of an image-guided intervention platform that integrates real-time ultrasound with a virtual reality environment, bringing together the pre-operative anatomy derived from MRI or CT, representations of tracked instrumentation inside the heart chamber, and the intra-operatively acquired ultrasound images.
Hambright, D; Hellman, M; Barrack, R
2018-01-01
The aims of this study were to examine the rate at which the positioning of the acetabular component, leg length discrepancy and femoral offset are outside an acceptable range in total hip arthroplasties (THAs) which either do or do not involve the use of intra-operative digital imaging. A retrospective case-control study was undertaken with 50 patients before and 50 patients after the integration of an intra-operative digital imaging system in THA. The demographics of the two groups were comparable for body mass index, age, laterality and the indication for surgery. The digital imaging group had more men than the group without. Surgical data and radiographic parameters, including the inclination and anteversion of the acetabular component, leg length discrepancy, and the difference in femoral offset compared with the contralateral hip were collected and compared, as well as the incidence of altering the position of a component based on the intra-operative image. Digital imaging took a mean of five minutes (2.3 to 14.6) to perform. Intra-operative changes with the use of digital imaging were made for 43 patients (86%), most commonly to adjust leg length and femoral offset. There was a decrease in the incidence of outliers when using intra-operative imaging compared with not using it in regard to leg length discrepancy (20% versus 52%, p = 0.001) and femoral offset inequality (18% versus 44%, p = 0.004). There was also a difference in the incidence of outliers in acetabular inclination (0% versus 7%, p = 0.023) and version (0% versus 4%, p = 0.114) compared with historical results of a high-volume surgeon at the same centre. The use of intra-operative digital imaging in THA improves the accuracy of the positioning of the components at THA without adding a substantial amount of time to the operation. Cite this article: Bone Joint J 2018;100B(1 Supple A):36-43. ©2018 The British Editorial Society of Bone & Joint Surgery.
Fluorescence guided lymph node biopsy in large animals using direct image projection device
NASA Astrophysics Data System (ADS)
Ringhausen, Elizabeth; Wang, Tylon; Pitts, Jonathan; Akers, Walter J.
2016-03-01
The use of fluorescence imaging to aid oncologic surgery is a fast-growing field in biomedical imaging, revolutionizing open and minimally invasive surgery practices. We have designed, constructed, and tested a system for fluorescence image acquisition and direct display on the surgical field for fluorescence-guided surgery. The system uses a near-infrared-sensitive CMOS camera for image acquisition, a near-infrared LED light source for excitation, and a DLP digital projector for projection of fluorescence image data onto the operating field in real time. Instrument control was implemented in Matlab for image capture, processing of acquired data, and alignment of image parameters with the projected pattern. Accuracy of alignment was evaluated statistically to demonstrate sensitivity to small objects and alignment throughout the imaging field. After verification of accurate alignment, feasibility for clinical application was demonstrated in large animal models of sentinel lymph node biopsy. Indocyanine green was injected subcutaneously in Yorkshire pigs at various locations to model sentinel lymph node biopsy in gynecologic cancers, head and neck cancer, and melanoma. Fluorescence was detected by the camera system during the operations and projected onto the imaging field, accurately identifying tissues containing the fluorescent tracer at up to 15 frames per second. Fluorescence information was projected as binary green regions after thresholding and denoising the raw intensity data. This initial clinical-scale prototype provided encouraging results for the feasibility of optical projection of acquired luminescence during open oncologic surgeries.
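The projection step described above reduces each raw fluorescence frame to a cleaned binary mask shown in green on the surgical field. A minimal version of that chain (threshold, median-filter denoising, colourisation) might look like the sketch below; the threshold value and filter size are illustrative guesses, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import median_filter

def fluorescence_overlay(frame, threshold=0.2, filt=5):
    """Convert a raw NIR fluorescence frame (float, 0..1) into a green RGB mask."""
    mask = frame > threshold                                   # keep bright fluorescent regions
    mask = median_filter(mask.astype(np.uint8), size=filt) > 0  # remove isolated noise pixels
    overlay = np.zeros(frame.shape + (3,), dtype=np.uint8)
    overlay[..., 1] = mask * 255                               # green channel only, for projection
    return overlay

# toy frame: a dim background with one bright "lymph node"
rng = np.random.default_rng(3)
frame = rng.normal(0.05, 0.03, size=(240, 320)).clip(0, 1)
frame[100:130, 150:180] += 0.6
print(int(fluorescence_overlay(frame)[:, :, 1].sum() // 255), "pixels projected green")
```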
Computerized lateral endoscopic approach to intervertebral bodies
NASA Astrophysics Data System (ADS)
Abbasi, Hamid R.; Hariri, Sanaz; Kim, Daniel; Shahidi, Ramin; Steinberg, Gary
2001-05-01
Spinal surgery is often necessary to ease back pain symptoms. Neuronavigation (NN) allows the surgeon to localize the position of his instruments in 3D using pre-operative CT scans registered to intra-operative marker positions in cranial surgeries. However, this tool is unavailable in spinal surgeries for a variety of reasons. For example, because of the spine's many degrees of freedom and flexibility, the geometric relationship of the skin to the internal spinal anatomy is not fixed. Guided by the currently available, imperfect 2D images, it is difficult for the surgeon to correct a patient's spinal anomaly; thus surgical relief of back pain is often only temporary. The Image Guidance Laboratory's (IGL) goal is to combine the direct optical control of traditional endoscopy with the 3D orientation of NN. This powerful tool requires registration of the patient's anatomy to the surgical navigation system using internal landmarks rather than skin markers. Pre-operative CT scans matched with intraoperative fluoroscopic images can overcome the problem of spinal movement in NN registration. The combination of endoscopy with fluoroscopic registration of vertebral bodies in a NN system provides a 3D intra-operative navigational system for spinal neurosurgery, allowing the internal surgical environment to be visualized from any orientation in real time. The accuracy of this system integration is being evaluated by assessing the success of nucleotomies and marker implantations guided by NN-registered endoscopy.
NASA Astrophysics Data System (ADS)
Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.
2017-07-01
Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be used to provide forecasts of nearshore waves and currents that may endanger beach goers. In this paper, an operational model for rip current prediction using nearshore bathymetry obtained from video imagery is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained with the video-based technique cBathy. The system is tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper tests the applicability of bathymetry obtained from the video technique as input for the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video-derived bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground-truth observations. The bathymetry validation is followed by an example of an operational forecasting type of simulation for predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques that allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
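To make the multiplexing idea concrete, the toy below uses Walsh-Hadamard codes, whose elementwise products are themselves Walsh codes, to modulate two received signals, combine them into one shared channel, square the sum, and recover their cross-correlation (visibility) by correlating against the product code. It is a behavioural sketch with made-up real-valued signals and simple ±1 codes rather than the paper's two-bit phase codes or hardware model.

```python
import numpy as np
from scipy.linalg import hadamard

N = 64                               # code length
H = hadamard(N)                      # rows are mutually orthogonal +/-1 Walsh codes
c1, c2 = H[1], H[2]                  # codes applied by two front-end phase shifters
c12 = c1 * c2                        # product code: another zero-mean Walsh row

# two coherent input signals with a known correlation <s1*s2>
s1 = np.full(N, 0.8)
s2 = np.full(N, 0.5)

combined = c1 * s1 + c2 * s2         # single shared hardware path after combining
detected = combined ** 2             # square-law term obtained after digitization

visibility = (detected * c12).mean() / 2.0   # demultiplex the cross term with c12
print("recovered <s1*s2> =", visibility, " expected =", (s1 * s2).mean())
```

Squaring gives s1^2 + s2^2 + 2*c1*c2*s1*s2; the first two terms average to zero against the zero-mean product code, so only the desired correlation survives the demultiplexing.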
PACS-based interface for 3D anatomical structure visualization and surgical planning
NASA Astrophysics Data System (ADS)
Koehl, Christophe; Soler, Luc; Marescaux, Jacques
2002-05-01
The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that translate 2D slices into the 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be achieved through computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of and interaction with virtual organs delineated from CT or MRI scans. This software provides 3D real-time surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a surgical planning tool allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology: it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields, and constitutes the first step in the future development of augmented reality and surgical simulation systems.
Tracker: Image-Processing and Object-Tracking System Developed
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Theodore W.
1999-01-01
Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, so every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. The software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their office for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
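As a rough illustration of the simplest of those tracking modes (threshold-based centroid tracking applied frame by frame), consider the sketch below. It is a generic reimplementation on a synthetic frame sequence, not Tracker's own code.

```python
import numpy as np

def track_centroid(frame, threshold):
    """Return (row, col) centroid of all pixels above threshold, or None."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# synthetic sequence: a bright blob drifting to the right
rng = np.random.default_rng(7)
positions = []
for t in range(5):
    frame = rng.normal(10, 2, size=(100, 100))
    r, c = 50, 20 + 10 * t
    frame[r - 3:r + 4, c - 3:c + 4] += 100          # the bright "flame" / object
    positions.append(track_centroid(frame, threshold=50))

# velocity estimate from successive centroids (pixels/frame)
vel = np.diff(np.array(positions), axis=0)
print("centroids:", positions)
print("mean velocity (px/frame):", vel.mean(axis=0))
```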
Increasing situation awareness of the CBRNE robot operators
NASA Astrophysics Data System (ADS)
Jasiobedzki, Piotr; Ng, Ho-Kong; Bondy, Michel; McDiarmid, Carl H.
2010-04-01
Situational awareness of CBRN robot operators is quite limited, as they rely on images and measurements from on-board detectors. This paper describes a novel framework that enables a uniform and intuitive access to live and recent data via 2D and 3D representations of visited sites. These representations are created automatically and augmented with images, models and CBRNE measurements. This framework has been developed for CBRNE Crime Scene Modeler (C2SM), a mobile CBRNE mapping system. The system creates representations (2D floor plans and 3D photorealistic models) of the visited sites, which are then automatically augmented with CBRNE detector measurements. The data stored in a database is accessed using a variety of user interfaces providing different perspectives and increasing operators' situational awareness.
High frame rate imaging systems developed in Northwest Institute of Nuclear Technology
NASA Astrophysics Data System (ADS)
Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli
2007-01-01
This paper presents high frame rate imaging systems developed at the Northwest Institute of Nuclear Technology in recent years. Three types of imaging system are included. The first type uses the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a flash-event capture mode; a specific timing sequence was designed to satisfy this requirement. The camera image data can be transmitted to a remote area by coaxial or optical fiber cable and then stored. The second type of imaging system uses the PHOTOBIT complementary metal oxide semiconductor (CMOS) PB-MV13 as the image sensor, which has a high resolution of 1280 (H) × 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps in full frame and 4000 fps in partial frame; the prototype scheme of the system is presented. The third type of imaging system adopts charge-coupled devices (CCDs) as the imagers, with MINTRON MTV-1881EX, DALSA CA-D1 and CA-D6 camera heads used in the systems development. A comparison of the features of the RA100A-, PB-MV13- and CA-D6-based systems is given at the end.
Accessible magnetic resonance imaging.
Kaufman, L; Arakawa, M; Hale, J; Rothschild, P; Carlson, J; Hake, K; Kramer, D; Lu, W; Van Heteren, J
1989-10-01
The cost of magnetic resonance imaging (MRI) is driven by magnetic field strength. Misperceptions as to the impact of field strength on performance have led to systems that are more expensive than they need to be. Careful analysis of all the factors that affect diagnostic quality leads to the conclusion that field strength per se is not a strong determinant of system performance. Freed from the constraints imposed by high-field operation, it is possible to exploit a varied set of opportunities afforded by low-field operation. In addition to lower costs and easier siting, we can take advantage of shortened T1 times, higher contrast, reduced sensitivity to motion, and reduced radiofrequency power deposition. These conceptual advantages can be made to coalesce into practical imaging systems. We describe a low-cost MRI system that utilizes a permanent magnet of open design. Careful optimization of receiving antennas and acquisition sequences permits performance levels consistent with those needed for an effective diagnostic unit. Ancillary advantages include easy access to the patient, reduced claustrophobia, quiet and comfortable operation, and absence of a missile effect. The system can be sited in 350 sq ft and consumes a modest amount of electricity. MRI equipment of this kind can widen the population base that can access this powerful and beneficial diagnostic modality.
A novel mechatronic tool for computer-assisted arthroscopy.
Dario, P; Carrozza, M C; Marcacci, M; D'Attanasio, S; Magnami, B; Tonet, O; Megali, G
2000-03-01
This paper describes a novel mechatronic tool for arthroscopy, which is at the same time a smart tool for traditional arthroscopy and the main component of a system for computer-assisted arthroscopy. The mechatronic arthroscope has a cable-actuated servomotor-driven multi-joint mechanical structure, is equipped with a position sensor measuring the orientation of the tip and with a force sensor detecting possible contact with delicate tissues in the knee, and incorporates an embedded microcontroller for sensor signal processing, motor driving and interfacing with the surgeon and/or the system control unit. When used manually, the mechatronic arthroscope enhances the surgeon's capabilities by enabling him/her to easily control tip motion and to prevent undesired contacts. When the tool is integrated in a complete system for computer-assisted arthroscopy, the trajectory of the arthroscope is reconstructed in real time by an optical tracking system using infrared emitters located in the handle, providing advantages in terms of improved intervention accuracy. The computer-assisted arthroscopy system comprises an image processing module for segmentation and three-dimensional reconstruction of preoperative computer tomography or magnetic resonance images, a registration module for measuring the position of the knee joint, tracking the trajectory of the operating tools, and matching preoperative and intra-operative images, and a human-machine interface that displays the enhanced reality scenario and data from the mechatronic arthroscope in a friendly and intuitive manner. By integrating preoperative and intra-operative images and information provided by the mechatronic arthroscope, the system allows virtual navigation in the knee joint during the planning phase and computer guidance by augmented reality during the intervention. This paper describes in detail the characteristics of the mechatronic arthroscope and of the system for computer-assisted arthroscopy and discusses experimental results obtained with a preliminary version of the tool and of the system.
An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform
NASA Astrophysics Data System (ADS)
Khurana, Mehak; Singh, Hukum
2017-09-01
To enhance the security of the system and to protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT) approach, which adds nonlinearity by including a cube operation in the encryption path and a cube-root operation in the decryption path. In this cryptosystem, random phase masks are used as encryption keys, phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and a cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant against standard attacks. The robustness of the proposed cryptosystem has been analysed and verified for various parameters by simulation in MATLAB 7.9.0 (R2008a). Experimental results are provided to highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.
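For orientation, the sketch below reproduces only the basic phase-truncated Fourier transform skeleton that such schemes build on: random phase masks act as public encryption keys while the truncated phases are retained as private decryption keys. The DCT stage and the cube/cube-root nonlinearity that distinguish the paper's cryptosystem are deliberately omitted, so this is a simplified illustration rather than the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.random((64, 64))                      # stand-in for the plaintext image

# public encryption keys: two random phase masks
R1 = np.exp(1j * 2 * np.pi * rng.random(img.shape))
R2 = np.exp(1j * 2 * np.pi * rng.random(img.shape))

def phase_truncate(x):
    """Split a complex field into its amplitude and its unit-modulus phase."""
    return np.abs(x), np.exp(1j * np.angle(x))

# --- encryption: two Fourier stages, keeping amplitudes, saving phases as private keys
A1, P1 = phase_truncate(np.fft.fft2(img * R1))      # P1: private decryption key 1
cipher, P2 = phase_truncate(np.fft.fft2(A1 * R2))   # P2: private decryption key 2

# --- decryption with the private keys (asymmetric: R1 and R2 alone are useless)
A1_rec = np.abs(np.fft.ifft2(cipher * P2))
img_rec = np.abs(np.fft.ifft2(A1_rec * P1))

print("max reconstruction error:", np.abs(img_rec - img).max())
```

Reconstruction is exact up to floating-point error because each truncated phase, when restored, turns the stored amplitude back into the full complex spectrum of the previous stage.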
[Dry view laser imager--a new economical photothermal imaging method].
Weberling, R
1996-11-01
The production of hard copies is currently achieved by means of laser imagers and wet film processing, either in systems attached directly to (or built into) the laser imager or in a darkroom. Variations in image quality resulting from wet film development that is not always optimal are frequent. A newly developed thermographic film developer for laser films, which requires no liquid or powdered chemicals, is environmentally preferable and reduces operating costs. The completely dry developing process provides permanent image documentation meeting the quality and safety requirements of RöV and BAK. One of the currently available systems of this type, the DryView Laser Imager, is inexpensive and easy to install. The selective connection principle of the DryView Laser Imager can be expanded as required and accepts digital and/or analog interfaces with all imaging systems (CT, MR, DR, US, NM) from the various manufacturers.
Image acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Reardon, Frank J.; Salutz, James R.
1991-07-01
The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate with the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.
ARIES: Enabling Visual Exploration and Organization of Art Image Collections.
Crissaff, Lhaylla; Wood Ruby, Louisa; Deutch, Samantha; DuBois, R Luke; Fekete, Jean-Daniel; Freire, Juliana; Silva, Claudio
2018-01-01
Art historians have traditionally used physical light boxes to prepare exhibits or curate collections. On a light box, they can place slides or printed images, move the images around at will, group them as desired, and visually compare them. The transition to digital images has rendered this workflow obsolete. Now, art historians lack well-designed, unified interactive software tools that effectively support the operations they perform with physical light boxes. To address this problem, we designed ARIES (ARt Image Exploration Space), an interactive image manipulation system that enables the exploration and organization of fine digital art. The system allows images to be compared in multiple ways, offering dynamic overlays analogous to a physical light box, and supporting advanced image comparisons and feature-matching functions, available through computational image processing. We demonstrate the effectiveness of our system in supporting art historians' tasks through real use cases.
A real-time MTFC algorithm of space remote-sensing camera based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Liting; Huang, Gang; Lin, Zhe
2018-01-01
A real-time MTF compensation (MTFC) algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on orbit. The image restoration algorithm adopted a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference operation, the normalized MTF and the MTFC parameters. The MTFC image filtering and noise suppression module performs the filtering algorithm and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. The image grey-level gradient, point sharpness, edge contrast and mid-to-high-frequency content were enhanced, while the image SNR after restoration decreased by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.
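As a conceptual reference for what an MTFC step does, independent of the FPGA implementation described here, MTF compensation can be expressed as a frequency-domain boost filter whose gain approximates the inverse of the measured system MTF, rolled off to avoid amplifying noise. The sketch below assumes a Gaussian system MTF and a Wiener-style regularisation constant; it is not the paper's algorithm.

```python
import numpy as np

def gaussian_mtf(shape, sigma):
    """Assumed Gaussian system MTF sampled on the FFT frequency grid."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

def mtfc(image, sigma=1.2, k=0.01):
    """Wiener-style MTF compensation: regularised inverse of the system MTF."""
    mtf = gaussian_mtf(image.shape, sigma)
    boost = mtf / (mtf ** 2 + k)           # ~1/MTF at low frequency, rolls off at high
    return np.real(np.fft.ifft2(np.fft.fft2(image) * boost))

# blur a checkerboard with the same assumed MTF, then restore it
rng = np.random.default_rng(0)
sharp = np.kron(rng.integers(0, 2, (8, 8)).astype(float), np.ones((8, 8)))
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * gaussian_mtf(sharp.shape, 1.2)))
restored = mtfc(blurred)
print("max edge contrast blurred -> restored:",
      round(np.abs(np.diff(blurred, axis=1)).max(), 3), "->",
      round(np.abs(np.diff(restored, axis=1)).max(), 3))
```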
NASA Astrophysics Data System (ADS)
Lynam, Jeff R.
2001-09-01
A more highly integrated, electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced man-portable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages and capabilities in lightweight, fieldable target location, ranging and imaging. The unit incorporates a wide field-of-view, 5 deg x 3 deg, uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are performed with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5 micron region, used in conjunction with a range-gated, electron-bombarded CCD digital camera to image the target objective in a narrower, 0.3 deg, field of view. Target range is acquired using the integrated LRF, and target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank and corrected magnetic azimuth. Range-gate timing and coordinated receiver optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal, rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper describes the flash laser illumination technology, the EBCCD camera technology with the flash laser detection system, and image resolution improvement through frame averaging.
LAS - LAND ANALYSIS SYSTEM, VERSION 5.0
NASA Technical Reports Server (NTRS)
Pease, P. B.
1994-01-01
The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image to image and map to map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive mode of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C-language and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. 
This program was developed in 1986 and last updated in 1992.
The use of LANDSAT DCS and imagery in reservoir management and operation
NASA Technical Reports Server (NTRS)
Cooper, S.; Bock, P.; Horowitz, J.; Foran, D.
1975-01-01
Experiments by the New England Division (NED), Corps of Engineers, with the LANDSAT-1 data collection and imaging systems are reported. The data cover the future usefulness of data products received from satellites such as LANDSAT in the day-to-day operation of the NED water resources systems used to control floods.
Design of the compact high-resolution imaging spectrometer (CHRIS), and future developments
NASA Astrophysics Data System (ADS)
Cutter, Mike; Lobb, Dan
2017-11-01
The CHRIS instrument was launched on ESA's PROBA platform in October 2001, and is providing hyperspectral images of selected ground areas at 17m ground sampling distance, in the spectral range 415nm to 1050nm. Platform agility allows image sets to be taken at multiple view angles in each overpass. The design of the instrument is briefly outlined, including design of optics, structures, detection and in-flight calibration system. Lessons learnt from construction and operation of the experimental system, and possible design directions for future hyperspectral systems, are discussed.
2009-03-04
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the GOES-O satellite will undergo final testing of the imaging system, instrumentation, communications and power systems. The latest Geostationary Operational Environmental Satellite, GOES-O was developed by NASA for the National Oceanic and Atmospheric Administration, or NOAA. The GOES-O satellite is targeted to launch April 28 onboard a United Launch Alliance Delta IV expendable launch vehicle. Once in orbit, GOES-O will be designated GOES-14, and NASA will provide on-orbit checkout and then transfer operational responsibility to NOAA. GOES-O will be placed in on-orbit storage as a replacement for an older GOES satellite. GOES-O carries an advanced attitude control system using star trackers with spacecraft optical bench Imager and Sounder mountings that provide enhanced instrument pointing performance for improved image navigation and registration to better locate severe storms and other events important to the NOAA National Weather Service. Photo credit: NASA/Kim Shiflett
Analysis and design of stereoscopic display in stereo television endoscope system
NASA Astrophysics Data System (ADS)
Feng, Dawei
2008-12-01
Many 3D displays have been proposed for medical use. When designing and evaluating a new system, surgeons have three demands. The first priority is precision. Second, displayed images should be easy to understand. In addition, since surgery lasts for hours, the display must not be fatiguing. The stereo television endoscope investigated in this paper images the celiac viscera on the photosurfaces of the left and right CCDs, imitating human binocular stereo vision by means of a dual optical path system. The left and right video signals are processed by frequency multiplication and displayed on the monitor, and the viewer can observe a stereo image with depth impression using a polarized LCD screen and a pair of polarized glasses. Clinical experiments show that by using the stereo TV endoscope, minimally invasive surgery can be made safer and more reliable, operation time can be shortened, and operative accuracy can be improved.
Fiber Optic Communication System For Medical Images
NASA Astrophysics Data System (ADS)
Arenson, Ronald L.; Morton, Dan E.; London, Jack W.
1982-01-01
This paper discusses a fiber optic communication system linking ultrasound devices, computerized tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk becomes available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows the transfer of high resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques are described. The switching methodology and communication protocols are also discussed.
NASA Astrophysics Data System (ADS)
Alqasemi, Umar; Li, Hai; Yuan, Guangqian; Kumavor, Patrick; Zanganeh, Saeid; Zhu, Quing
2014-07-01
Coregistered ultrasound (US) and photoacoustic imaging are emerging techniques for mapping the echogenic anatomical structure of tissue and its corresponding optical absorption. We report a 128-channel imaging system with real-time coregistration of the two modalities, which provides up to 15 coregistered frames per second limited by the laser pulse repetition rate. In addition, the system integrates a compact transvaginal imaging probe with a custom-designed fiber optic assembly for in vivo detection and characterization of human ovarian tissue. We present the coregistered US and photoacoustic imaging system structure, the optimal design of the PC interfacing software, and the reconfigurable field programmable gate array operation and optimization. Phantom experiments of system lateral resolution and axial sensitivity evaluation, examples of the real-time scanning of a tumor-bearing mouse, and ex vivo human ovaries studies are demonstrated.
NASA Astrophysics Data System (ADS)
Rodgers, Jessica R.; Surry, Kathleen; D'Souza, David; Leung, Eric; Fenster, Aaron
2017-03-01
Treatment for gynaecological cancers often includes brachytherapy; in particular, in high-dose-rate (HDR) interstitial brachytherapy, hollow needles are inserted into the tumour and surrounding area through a template in order to deliver the radiation dose. Currently, there is no standard modality for visualizing needles intra-operatively, despite the need for precise needle placement in order to deliver the optimal dose and avoid nearby organs, including the bladder and rectum. While three-dimensional (3D) transrectal ultrasound (TRUS) imaging has been proposed for 3D intra-operative needle guidance, anterior needles tend to be obscured by shadowing created by the template's vaginal cylinder. We have developed a 360-degree 3D transvaginal ultrasound (TVUS) system that uses a conventional two-dimensional side-fire TRUS probe rotated inside a hollow vaginal cylinder made from a sonolucent plastic (TPX). The system was validated using grid and sphere phantoms in order to test the geometric accuracy of the distance and volumetric measurements in the reconstructed image. To test the potential for visualizing needles, an agar phantom mimicking the geometry of the female pelvis was used. Needles were inserted into the phantom and then imaged using the 3D TVUS system. The needle trajectories and tip positions in the 3D TVUS scan were compared to their expected values and the needle tracks visualized in magnetic resonance images. Based on this initial study, 360-degree 3D TVUS imaging through a sonolucent vaginal cylinder is a feasible technique for intra-operatively visualizing needles during HDR interstitial gynaecological brachytherapy.
Three applications of backscatter x-ray imaging technology to homeland defense
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2005-05-01
A brief review of backscatter x-ray imaging and a description of three systems currently applying it to homeland defense missions (BodySearch, ZBV and ZBP). These missions include detection of concealed weapons, explosives and contraband on personnel, in vehicles and large cargo containers. An overview of the x-ray imaging subsystems is provided as well as sample images from each system. Key features such as x-ray safety, throughput and detection are discussed. Recent trends in operational modes are described that facilitate 100% inspection at high throughput chokepoints.
Proteus: a reconfigurable computational network for computer vision
NASA Astrophysics Data System (ADS)
Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.
1992-04-01
The Proteus architecture is a highly parallel MIMD (multiple instruction, multiple data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Gigaflops (80 Gigaflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, with the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external read/write allocating caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray scale dilation, erosion, opening, and closing.
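The morphology operators listed above (binary and grey-scale erosion, dilation, opening and closing) have direct single-machine equivalents in standard image-processing libraries. The sketch below shows them with scipy.ndimage purely to pin down what the operations mean; it has no relation to the Proteus parallel implementation.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
binary = rng.random((32, 32)) > 0.7                  # noisy binary test image
gray = ndimage.gaussian_filter(rng.random((32, 32)), 2)

se = np.ones((3, 3), dtype=bool)                     # 3x3 structuring element

eroded  = ndimage.binary_erosion(binary, structure=se)
dilated = ndimage.binary_dilation(binary, structure=se)
opened  = ndimage.binary_opening(binary, structure=se)   # erosion then dilation
closed  = ndimage.binary_closing(binary, structure=se)   # dilation then erosion

g_open  = ndimage.grey_opening(gray, size=(3, 3))        # grey-scale counterparts
g_close = ndimage.grey_closing(gray, size=(3, 3))

print("foreground pixels:", int(binary.sum()), "->",
      int(eroded.sum()), int(dilated.sum()), int(opened.sum()), int(closed.sum()))
print("grey opening/closing ranges:",
      round(float(g_open.min()), 2), "-", round(float(g_open.max()), 2), "|",
      round(float(g_close.min()), 2), "-", round(float(g_close.max()), 2))
```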
Brain imaging registry for neurologic diagnosis and research
NASA Astrophysics Data System (ADS)
Hoo, Kent S., Jr.; Wong, Stephen T. C.; Knowlton, Robert C.; Young, Geoffrey S.; Walker, John; Cao, Xinhua; Dillon, William P.; Hawkins, Randall A.; Laxer, Kenneth D.
2002-05-01
The purpose of this paper is to demonstrate the importance of building a brain imaging registry (BIR) on top of existing medical information systems including Picture Archiving Communication Systems (PACS) environment. We describe the design framework for a cluster of data marts whose purpose is to provide clinicians and researchers efficient access to a large volume of raw and processed patient images and associated data originating from multiple operational systems over time and spread out across different hospital departments and laboratories. The framework is designed using object-oriented analysis and design methodology. The BIR data marts each contain complete image and textual data relating to patients with a particular disease.
Hyperspectral imaging using the single-pixel Fourier transform technique
NASA Astrophysics Data System (ADS)
Jin, Senlin; Hui, Wangwei; Wang, Yunlong; Huang, Kaicheng; Shi, Qiushuai; Ying, Cuifeng; Liu, Dongqi; Ye, Qing; Zhou, Wenyuan; Tian, Jianguo
2017-03-01
Hyperspectral imaging technology is playing an increasingly important role in the fields of food analysis, medicine and biotechnology. To improve the speed of operation and increase the light throughput in a compact equipment structure, a Fourier transform hyperspectral imaging system based on a single-pixel technique is proposed in this study. Compared with current imaging spectrometry approaches, the proposed system has a wider spectral range (400-1100 nm), better spectral resolution (1 nm) and requires fewer measurements (a sampling rate of 6.25%). The performance of this system was verified by applying it to the non-destructive testing of potatoes.
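The measurement principle behind single-pixel Fourier imaging can be illustrated independently of the spectrometer hardware: each Fourier coefficient of the scene is obtained from four bucket-detector measurements made while projecting phase-shifted sinusoidal patterns, and the image is recovered from a small low-frequency subset of coefficients by an inverse FFT. The sketch below simulates that four-step scheme for a single spectral band; the pattern details and the fraction of coefficients kept (here roughly 8%) are illustrative, not the paper's settings.

```python
import numpy as np

N = 32
scene = np.zeros((N, N))
scene[8:24, 10:22] = 1.0                         # toy object in one spectral band

yy, xx = np.mgrid[0:N, 0:N]
spectrum = np.zeros((N, N), dtype=complex)
kmax = N // 8                                    # keep only the low-frequency corner

for ky in range(-kmax, kmax + 1):
    for kx in range(-kmax, kmax + 1):
        coeff = 0j
        # four-step phase shifting: D0 - Dpi gives the real part, Dpi/2 - D3pi/2 the imaginary part
        for weight, phi in zip([1, 1j, -1, -1j], [0, np.pi / 2, np.pi, 3 * np.pi / 2]):
            pattern = 0.5 + 0.5 * np.cos(2 * np.pi * (kx * xx + ky * yy) / N + phi)
            coeff += weight * (scene * pattern).sum()    # single-pixel (bucket) measurement
        spectrum[ky % N, kx % N] = coeff                 # this equals the DFT coefficient F[ky, kx]

recon = np.real(np.fft.ifft2(spectrum))
recon /= recon.max()
print("correlation with ground truth:",
      round(float(np.corrcoef(recon.ravel(), scene.ravel())[0, 1]), 3))
```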
Simulation of digital mammography images
NASA Astrophysics Data System (ADS)
Workman, Adam
2005-04-01
A number of different technologies are available for digital mammography. However, it is not clear how differences in the physical performance of the different imaging technologies affect clinical performance. Randomised controlled trials provide a means of gaining information on clinical performance; however, they do not provide a direct comparison of the different digital imaging technologies. This work describes a method of simulating the performance of different digital mammography systems. The method involves modifying the imaging performance parameters of images from a small-field digital mammography (SFDM) system, a high-resolution digital imaging system used for spot imaging. Under normal operating conditions this system produces images with a higher signal-to-noise ratio (SNR) over a wide spatial frequency range than current full-field digital mammography (FFDM) systems. The SFDM images can be 'degraded' by computer processing to simulate the characteristics of a FFDM system. Initial work characterised the physical performance (MTF, NPS) of the SFDM detector and developed a model and method for simulating the signal transfer and noise properties of a FFDM system. It was found that the SNR properties of the simulated FFDM images were very similar to those measured from an actual FFDM system, verifying the methodology used. The application of this technique to clinical images from the small-field system will allow the clinical performance of different FFDM systems to be simulated and directly compared using the same clinical image datasets.
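Conceptually, the degradation step amounts to filtering the high-SNR source image with the ratio of the target system's MTF to the source system's MTF and then adding noise shaped towards the target noise power spectrum. The sketch below shows a simplified version of that idea, with assumed Gaussian MTFs and white noise standing in for measured MTF and NPS curves; it is not the paper's calibrated procedure.

```python
import numpy as np

def gaussian_mtf(shape, sigma):
    """Assumed Gaussian MTF on the FFT frequency grid (sigma in pixels)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

def simulate_ffdm(sfdm_image, sigma_src=0.6, sigma_tgt=1.5, extra_noise=4.0, seed=0):
    """Degrade a high-resolution SFDM image to mimic a lower-MTF, noisier system."""
    ratio = gaussian_mtf(sfdm_image.shape, sigma_tgt) / gaussian_mtf(sfdm_image.shape, sigma_src)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sfdm_image) * ratio))
    rng = np.random.default_rng(seed)
    # white noise here stands in for noise shaped to the target NPS
    return blurred + rng.normal(0.0, extra_noise, sfdm_image.shape)

# toy high-SNR source image
rng = np.random.default_rng(1)
sfdm = np.kron(rng.integers(100, 200, (16, 16)).astype(float), np.ones((8, 8)))
ffdm_sim = simulate_ffdm(sfdm)
print("std of added degradation:", round(float((ffdm_sim - sfdm).std()), 2))
```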
Design of a handheld infrared imaging device based on uncooled infrared detector
NASA Astrophysics Data System (ADS)
Sun, Xianzhong; Li, Junwei; Zhang, Yazhou
2017-02-01
In this paper, we introduce the system structure and operating principle of the device, and discuss in detail our solutions for image data acquisition and storage, control of operating states and modes, and power management. We also propose a pseudo-colour algorithm for the thermal image and apply it in the device's image processing module. The thermal images are displayed in real time on a 1.8-inch TFT-LCD. The device has a compact structure and can easily be held in one hand. It also has good imaging performance with low power consumption, and its thermal sensitivity is less than 150 mK. Finally, we introduce one of its applications, fault diagnosis in electronic circuits; testing shows that it is a good solution for fast fault detection.
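The pseudo-colour step mentioned above amounts to mapping each 8-bit thermal intensity through a lookup table to an RGB triple. The specific palette used in the paper is not described, so the iron-like palette below is an assumption used only to show the mechanism.

```python
import numpy as np

def build_iron_palette():
    """256-entry RGB lookup table: black -> red -> yellow -> white (iron-like)."""
    lut = np.zeros((256, 3), dtype=np.uint8)
    t = np.arange(256) / 255.0
    lut[:, 0] = (np.clip(3.0 * t, 0, 1) * 255).astype(np.uint8)        # red rises first
    lut[:, 1] = (np.clip(3.0 * t - 1.0, 0, 1) * 255).astype(np.uint8)  # then green
    lut[:, 2] = (np.clip(3.0 * t - 2.0, 0, 1) * 255).astype(np.uint8)  # blue last -> white
    return lut

def pseudo_color(thermal_frame, lut):
    """Map an 8-bit grey thermal frame (H, W) to an RGB image (H, W, 3)."""
    return lut[thermal_frame]

# toy 8-bit thermal frame with a hot spot
frame = np.full((120, 160), 40, dtype=np.uint8)
frame[50:70, 70:90] = 220
rgb = pseudo_color(frame, build_iron_palette())
print(rgb.shape, rgb[60, 80], rgb[0, 0])   # hot-spot colour vs background colour
```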
Sim, K S; Teh, V; Tey, Y C; Kho, T K
2016-11-01
This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Using this new proposed technique, the modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator that removes the blocking effect from SEM images after the technique is applied, by properly distributing suitable pixel values over the whole image. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
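As a rough analogue of the processing chain described (equalising histograms block by block, then smoothing block boundaries with a convolution), the sketch below performs plain per-block histogram equalisation followed by a small mean-filter convolution. It omits the multiple-peak histogram splitting that defines MPHE, so it illustrates the sub-blocking and deblocking idea only, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def equalize(block):
    """Plain histogram equalisation of an 8-bit block."""
    hist = np.bincount(block.ravel(), minlength=256)
    cdf = hist.cumsum() / block.size
    return (cdf[block] * 255).astype(np.uint8)

def sub_block_equalize(image, block=64, smooth=5):
    """Equalise each block independently, then convolve to soften block seams."""
    out = np.empty_like(image)
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            out[r:r + block, c:c + block] = equalize(image[r:r + block, c:c + block])
    return uniform_filter(out.astype(float), size=smooth).astype(np.uint8)

# toy low-contrast "SEM" image
rng = np.random.default_rng(4)
sem = rng.normal(90, 10, (256, 256)).clip(0, 255).astype(np.uint8)
enhanced = sub_block_equalize(sem)
print("intensity range before:", int(sem.min()), int(sem.max()),
      " after:", int(enhanced.min()), int(enhanced.max()))
```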
System of radiographic control or an imaging system for personal radiographic inspection
NASA Astrophysics Data System (ADS)
Babichev, E. A.; Baru, S. E.; Neustroev, V. A.; Leonov, V. V.; Porosev, V. V.; Savinov, G. A.; Ukraintsev, Yu. G.
2004-06-01
A security system of personal radiographic inspection for the detection of explosive materials and plastic weapons was developed at BINP recently. Basic system parameters are: maximum scanning height 2000 mm, image width 800 mm, number of detector channels 768, channel size 1.05×1 mm, charge collection time per line 2.5 ms, scanning speed 40 cm/s, maximum scanning time 5 s, radiation dose per inspection <5 μSv. The detector is a multichannel ionization Xe chamber. The image of the inspected person appears on the display immediately after scanning. The pilot sample of this system was put into operation in March 2003.
Distance preservation in color image transforms
NASA Astrophysics Data System (ADS)
Santini, Simone
1999-12-01
Most current image processing systems work on color images, and color is a precious perceptual clue for determining image similarity. Working with color images, however, is not the same thing as working with images taking values in a 3D Euclidean space. Not only are color spaces bounded, but the characteristics of the observer endow the space with a 'perceptual' metric that in general does not correspond to the metric naturally inherited from R^3. This paper studies the problem of filtering color images abstractly. It begins by determining the properties of the color sum and color product operations such that the desirable properties of orthonormal bases are preserved. The paper then defines a general scheme, based on the action of the additive group on the color space, by which operations that satisfy the required properties can be defined.
Jiang, Y L; Yu, J P; Sun, H T; Guo, F X; Ji, Z; Fan, J H; Zhang, L J; Li, X; Wang, J J
2017-08-01
Objective: To compare post-implant target volumes and dosimetric evaluation against the pre-plan, for gross tumor volumes (GTV) obtained by CT image fusion versus manual delineation, in CT-guided radioactive seed implantation. Methods: A total of 10 patients treated with CT-guided (125)I seed implantation between March 2016 and April 2016 were analyzed at Peking University Third Hospital. All patients underwent pre-operative CT simulation, pre-operative planning, seed implantation, CT scanning after seed implantation and dosimetric evaluation of the GTV. For every patient, post-implant target volumes were delineated by both methods, forming two groups. Group 1: the pre-implant simulation and post-operative CT images were fused, and the GTV contours were generated automatically by the brachytherapy treatment planning system; Group 2: the GTV was contoured manually on the post-operative CT images by three senior radiation oncologists independently, and the average of the three data sets was used. Statistical analyses were performed using SPSS software, version 3.2.0. The paired t-test was used to compare the target volumes and D(90) parameters between the two modalities. Results: In Group 1, the average post-operative GTV was 12-167 (73±56) cm(3) and D(90) was 101-153 (142±19) Gy. In Group 2, the corresponding values were 14-186 (80±58) cm(3) and 96-146 (122±16) Gy, respectively. In Group 1, there was no statistical difference between pre-operation and post-operation in either target volume or D(90); the post-operative D(90) was slightly lower than that of the pre-plan, but the difference was not significant (P=0.142). In Group 2, there was a significant statistical difference in GTV between pre-operation and post-operation (P=0.002), and the difference in D(90) was similarly significant (P<0.01). Conclusion: Delineating the post-implant GTV by fusing the pre-implant simulation and post-operative CT images, with the GTV contours generated automatically by the brachytherapy treatment planning system, appears to offer better accuracy, reproducibility and convenience than manual delineation of the target volume, by minimizing interference from observer variation and metal artifacts. Further work with more cases is required in the future.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
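A minimal sketch of the spot-isolation and ranging idea described above, assuming a calibrated pinhole camera and a known laser/camera baseline; the patent's calibration and reference-point geometry are more involved, so treat the triangulation formula as illustrative.

```python
import numpy as np

def laser_spot_disparity(frame_off, frame_on, ref_point, threshold=30):
    """Subtract the pre-illumination frame from the illuminated one, keep only
    pixels that brightened (the laser spot), and return the spot centroid's
    pixel offset from the fixed reference point."""
    diff = frame_on.astype(np.int32) - frame_off.astype(np.int32)
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None                            # no spot detected
    spot = np.array([xs.mean(), ys.mean()])    # (x, y) centroid of the spot
    return spot - np.asarray(ref_point, dtype=float)

def range_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole triangulation: range falls off inversely with the measured
    disparity for a fixed laser/camera baseline."""
    d = float(np.linalg.norm(disparity_px))
    return np.inf if d == 0 else baseline_m * focal_px / d
```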
Differential morphology and image processing.
Maragos, P
1996-01-01
Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
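One concrete member of the min-sum difference-equation family mentioned above is the two-pass city-block distance transform; the sketch below is a textbook version of that recursion, not the exact equations from the paper.

```python
import numpy as np

def cityblock_distance_transform(feature_mask):
    """Two raster passes of a min-sum recursion propagate distances outward
    from the feature pixels (feature_mask == True), yielding the city-block
    distance transform of the background."""
    INF = 10 ** 9
    d = np.where(feature_mask, 0, INF).astype(np.int64)
    rows, cols = d.shape
    for r in range(rows):                         # forward pass
        for c in range(cols):
            if r > 0:
                d[r, c] = min(d[r, c], d[r - 1, c] + 1)
            if c > 0:
                d[r, c] = min(d[r, c], d[r, c - 1] + 1)
    for r in range(rows - 1, -1, -1):             # backward pass
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r, c] = min(d[r, c], d[r + 1, c] + 1)
            if c < cols - 1:
                d[r, c] = min(d[r, c], d[r, c + 1] + 1)
    return d
```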
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, C.S.; af Ekenstam, G.; Sallstrom, M.
1995-07-01
The Swedish Nuclear Power Inspectorate (SKI) and the US Department of Energy (DOE) sponsored work on a Remote Monitoring System (RMS) that was installed in August 1994 at the Barseback Works north of Malmo, Sweden. The RMS was designed to test the front end detection concept that would be used for unattended remote monitoring activities. Front end detection reduces the number of video images recorded and provides additional sensor verification of facility operations. The function of any safeguards Containment and Surveillance (C/S) system is to collect information, primarily images, that verifies the operations at a nuclear facility. Barseback is ideal for testing the concept of front end detection since most activity of safeguards interest is the movement of spent fuel, which occurs once a year. The RMS at Barseback uses a network of nodes to collect data from microwave motion detectors placed to detect the entrance and exit of spent fuel casks through a hatch. A video system using digital compression collects digital images and stores them on a hard drive and a digital optical disk. Data and images from the storage area are remotely monitored via telephone from Stockholm, Sweden and Albuquerque, NM, USA. These remote monitoring stations, operated by SKI and SNL respectively, can retrieve data and images from the RMS computer at the Barseback Facility. The data and images are encrypted before transmission. This paper presents details of the RMS and test results of this approach to front end detection of safeguard activities.
Lee, Changho; Kim, Kyungun; Han, Seunghoon; Kim, Sehui; Lee, Jun Hoon; Kim, Hong kyun; Kim, Chulhong; Jung, Woonggyu; Kim, Jeehyun
2014-01-01
An intraoperative surgical microscope is an essential tool in a neuro- or ophthalmological surgical environment. Yet, it has an inherent limitation in classifying subsurface information because it only provides surface images. To compensate for and assist with this problem, combining the surgical microscope with optical coherence tomography (OCT) has been adopted. We developed a real-time virtual intraoperative surgical OCT (VISOCT) system by adapting a spectral-domain OCT scanner to a commercial surgical microscope. Thanks to our custom-made beam splitting and image display subsystems, the OCT images and microscopic images are simultaneously visualized through an ocular lens or the eyepiece of the microscope. This improvement helps surgeons to focus on the operation without the distraction of viewing OCT images on a separate display. Moreover, displaying the live OCT images on the eyepiece aids the surgeon's depth perception during surgery. Finally, we successfully performed simulated penetrating keratoplasty in live rabbits. We believe that these technical achievements are crucial to enhancing the usability of the VISOCT system under real surgical operating conditions. PMID:24604471
NASDA's Advanced On-Line System (ADOLIS)
NASA Technical Reports Server (NTRS)
Yamamoto, Yoshikatsu; Hara, Hideo; Yamada, Shigeo; Hirata, Nobuyuki; Komatsu, Shigenori; Nishihata, Seiji; Oniyama, Akio
1993-01-01
Spacecraft operations, including ground system operations, are generally realized by various large or small scale group work performed by operators, engineers, managers, users and so on, whose positions are in many cases geographically distributed. In face-to-face work environments it is easy for them to understand each other. However, in distributed work environments that rely on communication media, if only audio is used, they become estranged from each other and lose interest in and continuity of the work. This is an obstacle to smooth operation of spacecraft. NASDA has developed an experimental model of a new real-time operation control system called 'ADOLIS' (ADvanced On-Line System), adapted to such a distributed environment, using a multi-media system dealing with character, figure, image, handwriting, video and audio information, which can be accommodated to a wide range of operation systems including spacecraft and ground systems. This paper describes the results of the development of the experimental model.
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
NASA Astrophysics Data System (ADS)
Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis
2013-05-01
Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large aperture optics, which increase the achievable size and weight of such systems to beyond what can be supported by many applications. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The side bands are subsequently stripped from the optical carrier and recombined to provide a real time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.
Overview of TAMU-CC Unmanned Aircraft Systems Coastal Research in the Port Mansfield Area, June 2015
NASA Astrophysics Data System (ADS)
Starek, M. J.; Bridges, D. H.
2016-02-01
In June, 2015, the TAMU-CC Unmanned Aircraft Systems Program, with the support of the Lone Star UAS Center of Excellence and Innovation, conducted a week-long UAS exercise in the coastal region near Port Mansfield, Texas. The platform used was TAMU-CC's RS-16, a variant of the Arcturus T-16XL, that was equipped with a three-camera imaging system which acquired high-resolution images in the optical range of the electromagnetic spectrum and lower resolution images in the infrared and ultraviolet ranges of the spectrum. The RS-16 has a wingspan of 12.9 ft, a typical take-off weight of 70 lbs, and a typical cruising speed of 60 kt. A total of 9 flights were conducted over 7 days, with a total of 22.9 flight hours. Different areas of interest were mapped for different researchers investigating specific coastal phenomena. This poster will describe the overall operational aspects of the exercise. The aircraft and imaging system will be described in detail, as will the operational procedures and subsequent data reduction procedures. The process of selection of the coastal regions for investigation and the flight planning involved in mapping those regions will be discussed. A summary of the resulting image data will be presented.
Digital Image Display Control System, DIDCS [for astronomical analysis]
NASA Technical Reports Server (NTRS)
Fischel, D.; Klinglesmith, D. A., III
1979-01-01
DIDCS is an interactive image display and manipulation system that is used for a variety of astronomical image reduction and analysis operations. The hardware system consists of a PDP 11/40 main frame with 32K of 16-bit core memory; 96K of 16-bit MOS memory; two 9 track 800 BPI tape drives; eight 2.5 million byte RKO5 type disk packs, three user terminals, and a COMTAL 8000-S display system which has sufficient memory to store and display three 512 x 512 x 8 bit images along with an overlay plane and function table for each image, a pseudo color table and the capability for displaying true color. The software system is based around the language FORTH, which will permit an open ended dictionary of user level words for image analyses and display. A description of the hardware and software systems will be presented along with examples of the types of astronomical research that are being performed. Also a short discussion of the commonality and exchange of this type of image analysis system will be given.
A CMOS-based large-area high-resolution imaging system for high-energy x-ray applications
NASA Astrophysics Data System (ADS)
Rodricks, Brian; Fowler, Boyd; Liu, Chiao; Lowes, John; Haeffner, Dean; Lienert, Ulrich; Almer, John
2008-08-01
CCDs have been the primary sensor in imaging systems for x-ray diffraction and imaging applications in recent years. CCDs have met the fundamental requirements of low noise, high-sensitivity, high dynamic range and spatial resolution necessary for these scientific applications. State-of-the-art CMOS image sensor (CIS) technology has experienced dramatic improvements recently and their performance is rivaling or surpassing that of most CCDs. The advancement of CIS technology is at an ever-accelerating pace and is driven by the multi-billion dollar consumer market. There are several advantages of CIS over traditional CCDs and other solid-state imaging devices; they include low power, high-speed operation, system-on-chip integration and lower manufacturing costs. The combination of superior imaging performance and system advantages makes CIS a good candidate for high-sensitivity imaging system development. This paper will describe a 1344 x 1212 CIS imaging system with a 19.5μm pitch optimized for x-ray scattering studies at high-energies. Fundamental metrics of linearity, dynamic range, spatial resolution, conversion gain, sensitivity are estimated. The Detective Quantum Efficiency (DQE) is also estimated. Representative x-ray diffraction images are presented. Diffraction images are compared against a CCD-based imaging system.
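The abstract does not spell out how conversion gain was estimated; a common approach for CMOS/CCD characterization is the photon-transfer method sketched below (two flat-field and two dark frames, with frame differencing used to remove fixed-pattern noise). This is a generic sketch, not necessarily the authors' procedure.

```python
import numpy as np

def conversion_gain(flat1, flat2, dark1, dark2):
    """Photon-transfer estimate of conversion gain in electrons per DN:
    gain = mean signal / shot-noise variance."""
    signal = 0.5 * (flat1.mean() + flat2.mean()) - 0.5 * (dark1.mean() + dark2.mean())
    # The variance of a frame difference is twice the temporal noise variance
    # and is free of fixed-pattern noise.
    var_flat = np.var(flat1.astype(np.float64) - flat2.astype(np.float64)) / 2.0
    var_dark = np.var(dark1.astype(np.float64) - dark2.astype(np.float64)) / 2.0
    shot_var = var_flat - var_dark
    return signal / shot_var
```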
Automated Formosat Image Processing System for Rapid Response to International Disasters
NASA Astrophysics Data System (ADS)
Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.
2016-06-01
FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May of 2004 into a Sun-synchronous orbit at 891 kilometers of altitude. With its daily revisit feature, the 2-m panchromatic and 8-m multi-spectral resolution images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of tasks conducted by different institutions in Taiwan in the effort of responding to international disasters. The institutes involved include the space agency-National Space Organization (NSPO), the Center for Satellite Remote Sensing Research of National Central University, the GIS Center of Feng-Chia University, and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks range from receiving emergency observation requests, scheduling and tasking of satellite operation, and downlink to ground stations, to image processing including data injection and ortho-rectification, and delivery of image products. With the lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery in an efficient manner. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and image download. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, the recent response supporting Sentinel Asia during the Nepal Earthquake, is demonstrated herein.
Real-time Implementation of a Dual-Mode Ultrasound Array System: In Vivo Results
Casper, Andrew J.; Liu, Dalong; Ballard, John R.; Ebbini, Emad S.
2013-01-01
A real-time dual-mode ultrasound array (DMUA) system for imaging and therapy is described. The system utilizes a concave (40-mm radius of curvature) 3.5 MHz, 32 element array and a modular multi-channel transmitter/receiver. It is capable of operating in a variety of imaging and therapy modes (on transmit) and of continuous receive on all array elements even during high-power operation. A signal chain consisting of field-programmable gate arrays (FPGA) and graphical processing units (GPU) is used to enable real-time, software-defined beamforming and image formation. Imaging data, from quality assurance phantoms as well as in vivo small and large animal models, are presented and discussed. Corresponding images obtained using a temporally-synchronized and spatially-aligned diagnostic probe confirm the DMUA's ability to form anatomically-correct images with sufficient contrast in an extended field of view (FOV) around its geometric center. In addition, high frame rate DMUA data also demonstrate the feasibility of detection and localization of echo changes indicative of cavitation and/or tissue boiling during HIFU exposures with 45-50 dB dynamic range. The results also show that the axial and lateral resolution of the DMUA are consistent with its f-number and bandwidth, with well behaved speckle cell characteristics. These results point the way to a theranostic DMUA system capable of quantitative imaging of tissue property changes with high specificity to lesion formation using focused ultrasound. PMID:23708766
Acousto-optic laser projection systems for displaying TV information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulyaev, Yu V; Kazaryan, M A; Mokrushin, Yu M
2015-04-30
This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation. (review)
Lock-In Imaging System for Detecting Disturbances in Fluid
NASA Technical Reports Server (NTRS)
Park, Yeonjoon (Inventor); Choi, Sang Hyouk (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor); Dimarcantonio, Albert L. (Inventor)
2014-01-01
A lock-in imaging system is configured for detecting a disturbance in air. The system includes an airplane, an interferometer, and a telescopic imaging camera. The airplane includes a fuselage and a pair of wings. The airplane is configured for flight in air. The interferometer is operatively disposed on the airplane and configured for producing an interference pattern by splitting a beam of light into two beams along two paths and recombining the two beams at a junction point in a front flight path of the airplane during flight. The telescopic imaging camera is configured for capturing an image of the beams at the junction point. The telescopic imaging camera is configured for detecting the disturbance in air in an optical path, based on an index of refraction of the image, as detected at the junction point.
Night vision imaging system design, integration and verification in spacecraft vacuum thermal test
NASA Astrophysics Data System (ADS)
Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing
2015-08-01
The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow for early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage or electric heaters. Since the infrared cage and electric heaters do not emit visible light, and an infrared lamp emits only limited visible light, an ordinary camera cannot operate at the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To improve the ability of fine monitoring of the spacecraft and the presentation of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image intensifier ICCD camera, an assistant luminance system, a glare protection system, a thermal control system and a computer control system. Multi-frame accumulation target detection technology is adopted for high quality image recognition in the captive test. The optical, mechanical and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A Molybdenum/Polyimide thin film electrical heater controls the temperature of the ICCD camera. The results of the performance validation test show that the system can operate in a vacuum thermal environment of 1.33×10-3 Pa vacuum degree and 100 K shroud temperature in the space environment simulator, and its working temperature was maintained at 5° during the two-day test. The night vision imaging system achieves a video resolving power of 60 lp/mm.
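Multi-frame accumulation boosts low-light SNR simply by averaging aligned frames, with noise dropping roughly as the square root of the frame count; a minimal sketch follows (the system's registration and detection logic are not shown).

```python
import numpy as np

def accumulate_frames(frames):
    """Average N aligned low-light frames; for uncorrelated temporal noise the
    SNR of the result improves roughly as sqrt(N)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)

# Usage sketch: accumulate the last 16 ICCD frames before running detection.
# enhanced = accumulate_frames(frame_buffer[-16:])
```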
Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken
2005-01-01
This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), while stereo processing (left/right pairs) is provided for raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-spaces) and slope maps.
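Of the onboard steps listed, flat-field correction is easy to illustrate; the sketch below shows the generic dark-subtract-and-normalize form, not the MER flight software implementation.

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Remove pixel-to-pixel response variation: subtract the dark frame and
    divide by the flat-field response normalized to unit mean."""
    response = flat.astype(np.float64) - dark
    response /= response.mean()
    return (raw.astype(np.float64) - dark) / np.clip(response, 1e-6, None)
```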
Digital sun sensor multi-spot operation.
Rufino, Giancarlo; Grassi, Michele
2012-11-28
The operation and testing of a multi-spot digital sun sensor for precise sun-line determination is described. The image forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measurements. Nevertheless, operating the sensor over a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity level vary widely. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been adopted, together with a calibration function that also exploits knowledge of the sun-spot array size. The main focus of the present paper is the experimental validation of the wide field of view operation of the sensor using a sensor prototype and a laboratory test facility. Results demonstrate that high measurement precision is retained also for large off-boresight angles.
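A simplified sketch of the averaging step, assuming the per-aperture geometric offsets have already been compensated so that all spot centroids estimate the same sun-line intersection; the real sensor's calibration function and variable-shutter handling are omitted, and all names here are illustrative.

```python
import numpy as np
from scipy import ndimage

def sun_line_angles(image, threshold, focal_len_px):
    """Average the centroids of all detected sun spots and convert the mean
    offset from the detector centre into two off-boresight angles."""
    labels, n = ndimage.label(image > threshold)
    if n == 0:
        return None
    centroids = np.array(ndimage.center_of_mass(image, labels, range(1, n + 1)))
    mean_row, mean_col = centroids.mean(axis=0)   # averaging reduces noise ~1/sqrt(n)
    cy, cx = (np.array(image.shape) - 1) / 2.0
    alpha = np.degrees(np.arctan2(mean_col - cx, focal_len_px))
    beta = np.degrees(np.arctan2(mean_row - cy, focal_len_px))
    return alpha, beta
```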
NASA Astrophysics Data System (ADS)
Luukanen, A.; Grönberg, L.; Helistö, P.; Penttilä, J. S.; Seppä, H.; Sipola, H.; Dietlein, C. R.; Grossman, E. N.
2006-05-01
The temperature resolving power (NETD) of millimeter wave imagers based on InP HEMT MMIC radiometers is typically about 1 K (30 ms), but the MMIC technology is limited to operating frequencies below ~150 GHz. In this paper we report the first results from a pixel developed for an eight-pixel sub-array of superconducting antenna-coupled microbolometers, a first step towards a real-time imaging system with frequency coverage of 0.2-3.6 THz. These detectors have demonstrated video-rate NETDs in the millikelvin range, close to the fundamental photon noise limit, when operated at a bath temperature of ~4 K. The detectors will be operated within a turn-key cryogen-free pulse tube refrigerator, which allows for continuous operation without the need for liquid cryogens. The outstanding frequency agility of bolometric detectors allows for multi-frequency imaging, which greatly enhances the discrimination of e.g. explosives against innocuous items concealed underneath clothing.
A Graphical Operator Interface for a Telerobotic Inspection System
NASA Technical Reports Server (NTRS)
Kim, W. S.; Tso, K. S.; Hayati, S.
1993-01-01
Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability. It supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Non-Invasive Periodontal Probing Through Fourier-Domain Optical Coherence Tomography.
Mota, Cláudia C B O; Fernandes, Luana O; Cimões, Renata; Gomes, Anderson S L
2015-09-01
Periodontitis is a multifactorial and infectious disease that may result in significant debilitation. The aim of this study is to exploit two optical coherence tomography (OCT) systems operating in the Fourier domain at different wavelengths, 930 and 1,325 nm, for structural analysis of periodontal tissue in porcine jaws. Five fresh porcine jaws were sectioned and stored in formalin before OCT analysis. Two- and three-dimensional OCT images of the tooth/gingiva interface were performed, and measurements of the gingival structures were obtained. The 930-nm OCT system operates in the spectral domain, whereas the 1,325-nm system is a swept-source model. Stereomicroscope images, the gold standard, were used for direct comparison. Through image analysis, it is possible to identify the free gingiva and the attached gingiva, the calculus deposition over tooth surfaces, and the subgingival calculus that enables the enlargement of the gingival sulcus. In addition, the gingival thickness and the gingival sulcus depth can be non-invasively measured, varying from 0.8 to 4 mm. Regarding the ability of the two OCT systems to visualize periodontal structures, the system operating at 1,325 nm shows a better performance, owing to a longer central wavelength that allows deeper tissue penetration. The results with the system at 930 nm can also be used, but some features could not be observed due to its lower penetration depth in the tissue.
High-quality remote interactive imaging in the operating theatre
NASA Astrophysics Data System (ADS)
Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan
2009-02-01
We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
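The PseudoGrey expansion can be illustrated compactly: each 8-bit base grey level is split into seven sub-steps by nudging individual R, G, B channels, ordered by their luminance contribution, giving 255 x 7 = 1,785 shades. The sketch below is an illustrative mapping and does not include the display calibration the deployed system applies.

```python
# Sub-step channel offsets (dR, dG, dB), ordered by increasing luminance
# contribution (Rec.601 weights ~0.30, 0.59, 0.11).
_SUBSTEPS = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1),
             (0, 1, 0), (0, 1, 1), (1, 1, 0)]

def pseudogrey(level):
    """Map an extended grey level in [0, 1784] to an (R, G, B) triplet that an
    8-bit display renders as a slightly off-white shade of grey."""
    base, sub = divmod(int(level), 7)
    dr, dg, db = _SUBSTEPS[sub]
    return (min(base + dr, 255), min(base + dg, 255), min(base + db, 255))

# Example: consecutive extended levels around mid-grey.
# pseudogrey(896) -> (128, 128, 128); pseudogrey(897) -> (128, 128, 129)
```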
Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform
NASA Astrophysics Data System (ADS)
Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.
2017-12-01
In an iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots and other noise, and the gray-level transitions in the images are not pronounced, so general iris localization methods fail on them. A method of iris localization based on the Canny operator and the gradient Hough transform is therefore proposed. Firstly, the images are pre-processed; then, using the gradient information of the images, the inner and outer edges of the iris are coarsely located with the Canny operator; finally, the gradient Hough transform is applied to precisely localize the inner and outer edges of the iris. The experimental results show that our algorithm achieves good localization of the inner and outer edges of the iris, has strong anti-interference ability, greatly reduces the localization time, and has higher accuracy and stability.
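OpenCV exposes both building blocks named above, the Canny edge detector and a gradient-based Hough circle transform, so a minimal reimplementation of the coarse-to-fine idea can look like the sketch below. All thresholds and radius ranges are placeholders, and the paper's own pre-processing and refinement steps are not reproduced.

```python
import cv2
import numpy as np

def locate_iris(gray):
    """Rough iris localization: Canny edges for inspection/ROI selection, then
    gradient Hough circle fits for the pupil (inner) and limbus (outer) edges."""
    blur = cv2.GaussianBlur(gray, (7, 7), 2)
    edges = cv2.Canny(blur, 30, 90)                    # coarse edge map
    pupil = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                             param1=90, param2=20, minRadius=20, maxRadius=60)
    limbus = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                              param1=90, param2=30, minRadius=60, maxRadius=140)
    best = lambda c: None if c is None else np.round(c[0, 0]).astype(int)  # (x, y, r)
    return best(pupil), best(limbus), edges
```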
Correction of a liquid lens for 3D imaging systems
NASA Astrophysics Data System (ADS)
Bower, Andrew J.; Bunch, Robert M.; Leisher, Paul O.; Li, Weixu; Christopher, Lauren A.
2012-06-01
3D imaging systems are currently being developed using liquid lens technology for use in medical devices as well as in consumer electronics. Liquid lenses operate on the principle of electrowetting to control the curvature of a buried surface, allowing for a voltage-controlled change in focal length. Imaging systems which utilize a liquid lens allow extraction of depth information from the object field through a controlled introduction of defocus into the system. The design of such a system must be carefully considered in order to simultaneously deliver good image quality and meet the depth of field requirements for image processing. In this work a corrective model has been designed for use with the Varioptic Arctic 316 liquid lens. The design is able to be optimized for depth of field while minimizing aberrations for a 3D imaging application. The modeled performance is compared to the measured performance of the corrected system over a large range of focal lengths.
Transforming medical imaging applications into collaborative PACS-based telemedical systems
NASA Astrophysics Data System (ADS)
Maani, Rouzbeh; Camorlinga, Sergio; Arnason, Neil
2011-03-01
Telemedical systems are not practical for use in a clinical workflow unless they are able to communicate with the Picture Archiving and Communications System (PACS). On the other hand, there are many medical imaging applications that are not developed as telemedical systems. Some medical imaging applications do not support collaboration and some do not communicate with the PACS and therefore limit their usability in clinical workflows. This paper presents a general architecture based on a three-tier architecture model. The architecture and the components developed within it, transform medical imaging applications into collaborative PACS-based telemedical systems. As a result, current medical imaging applications that are not telemedical, not supporting collaboration, and not communicating with PACS, can be enhanced to support collaboration among a group of physicians, be accessed remotely, and be clinically useful. The main advantage of the proposed architecture is that it does not impose any modification to the current medical imaging applications and does not make any assumptions about the underlying architecture or operating system.
NASA Technical Reports Server (NTRS)
Cardullo, Frank M.; Lewis, Harold W., III; Panfilov, Peter B.
2007-01-01
An extremely innovative approach has been presented, which is to have the surgeon operate through a simulator running in real-time enhanced with an intelligent controller component to enhance the safety and efficiency of a remotely conducted operation. The use of a simulator enables the surgeon to operate in a virtual environment free from the impediments of telecommunication delay. The simulator functions as a predictor and periodically the simulator state is corrected with truth data. Three major research areas must be explored in order to ensure achieving the objectives. They are: simulator as predictor, image processing, and intelligent control. Each is equally necessary for success of the project and each of these involves a significant intelligent component in it. These are diverse, interdisciplinary areas of investigation, thereby requiring a highly coordinated effort by all the members of our team, to ensure an integrated system. The following is a brief discussion of those areas. Simulator as a predictor: The delays encountered in remote robotic surgery will be greater than any encountered in human-machine systems analysis, with the possible exception of remote operations in space. Therefore, novel compensation techniques will be developed. Included will be the development of the real-time simulator, which is at the heart of our approach. The simulator will present real-time, stereoscopic images and artificial haptic stimuli to the surgeon. Image processing: Because of the delay and the possibility of insufficient bandwidth a high level of novel image processing is necessary. This image processing will include several innovative aspects, including image interpretation, video to graphical conversion, texture extraction, geometric processing, image compression and image generation at the surgeon station. Intelligent control: Since the approach we propose is in a sense predictor based, albeit a very sophisticated predictor, a controller, which not only optimizes end effector trajectory but also avoids error, is essential. We propose to investigate two different approaches to the controller design. One approach employs an optimal controller based on modern control theory; the other one involves soft computing techniques, i.e. fuzzy logic, neural networks, genetic algorithms and hybrids of these.
Miniaturized Airborne Imaging Central Server System
NASA Technical Reports Server (NTRS)
Sun, Xiuhong
2011-01-01
In recent years, some remote-sensing applications require advanced airborne multi-sensor systems to provide high performance reflective and emissive spectral imaging measurement rapidly over large areas. The key and unique problem is associated with a black box back-end system that operates a suite of cutting-edge imaging sensors to collect simultaneously the high throughput reflective and emissive spectral imaging data with precision georeference. This back-end system needs to be portable, easy-to-use, and reliable with advanced onboard processing. The innovation of the black box back-end is a miniaturized airborne imaging central server system (MAICSS). MAICSS integrates a complex embedded system of systems with dedicated power and signal electronic circuits inside to serve a suite of configurable cutting-edge electro-optical (EO), long-wave infrared (LWIR), and medium-wave infrared (MWIR) cameras, a hyperspectral imaging scanner, and a GPS and inertial measurement unit (IMU) for atmospheric and surface remote sensing. Its compatible sensor packages include NASA's 1,024 x 1,024 pixel LWIR quantum well infrared photodetector (QWIP) imager; a 60.5 megapixel BuckEye EO camera; and a fast (e.g. 200+ scanlines/s) and wide swath-width (e.g., 1,920+ pixels) CCD/InGaAs imager-based visible/near infrared reflectance (VNIR) and shortwave infrared (SWIR) imaging spectrometer. MAICSS records continuous precision georeferenced and time-tagged multisensor throughputs to mass storage devices at a high aggregate rate, typically 60 MB/s for its LWIR/EO payload. MAICSS is a complete stand-alone imaging server instrument with an easy-to-use software package for either autonomous data collection or interactive airborne operation. Advanced multisensor data acquisition and onboard processing software features have been implemented for MAICSS. With the onboard processing for real time image development, correction, histogram-equalization, compression, georeference, and data organization, fast aerial imaging applications, including the real time LWIR image mosaic for Google Earth, have been realized for NASA's LWIR QWIP instrument. MAICSS is a significant improvement and miniaturization of current multisensor technologies. Structurally, it has a complete modular and solid-state design. Without rotating hard drives and other moving parts, it is operational at high altitudes and survivable in high-vibration environments. It is assembled from a suite of miniaturized, precision-machined, standardized, and stackable interchangeable embedded instrument modules. These stackable modules can be bolted together with the interconnection wires inside for maximal simplicity and portability. Multiple modules are electronically interconnected as stacked. Alternatively, these dedicated modules can be flexibly distributed to fit the space constraints of a flying vehicle. As a flexibly configurable system, MAICSS can be tailored to interface a variety of multisensor packages. For example, with a 1,024 x 1,024 pixel LWIR and an 8,984 x 6,732 pixel EO payload, the complete MAICSS volume is approximately 7 x 9 x 11 in. (~18 x 23 x 28 cm), with a weight of 25 lb (~11.4 kg).
An automatic panoramic image reconstruction scheme from dental computed tomography images
Papakosta, Thekla K; Savva, Antonis D; Economopoulos, Theodore L; Gröhndal, H G
2017-01-01
Objectives: Panoramic images of the jaws are extensively used for dental examinations and/or surgical planning because they provide a general overview of the patient's maxillary and mandibular regions. Panoramic images are two-dimensional projections of three-dimensional (3D) objects. Therefore, it should be possible to reconstruct them from 3D radiographic representations of the jaws, produced by CBCT scanning, obviating the need for additional exposure to X-rays, should there be a need of panoramic views. The aim of this article is to present an automated method for reconstructing panoramic dental images from CBCT data. Methods: The proposed methodology consists of a series of sequential processing stages for detecting a fitting dental arch which is used for projecting the 3D information of the CBCT data to the two-dimensional plane of the panoramic image. The detection is based on a template polynomial which is constructed from a training data set. Results: A total of 42 CBCT data sets of real clinical pre-operative and post-operative representations from 21 patients were used. Eight data sets were used for training the system and the rest for testing. Conclusions: The proposed methodology was successfully applied to CBCT data sets, producing corresponding panoramic images, suitable for examining pre-operatively and post-operatively the patients' maxillary and mandibular regions. PMID:28112548
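Once the arch polynomial has been detected, the projection step amounts to sweeping a slab along the arch and collapsing it into image columns. The sketch below assumes the arch is already available as a fitted numpy.poly1d on an axial slice and uses a maximum-intensity projection; the paper's template-based arch detection and its exact projection rule are not reproduced, and all names are illustrative.

```python
import numpy as np

def panoramic_from_cbct(volume, arch_poly, n_samples=800, thickness=10):
    """Project a CBCT volume (indexed z, y, x) onto a panoramic plane: image
    columns follow the fitted dental arch, and each column is a
    maximum-intensity projection through a slab normal to the arch."""
    xs = np.linspace(0, volume.shape[2] - 1, n_samples)
    ys = arch_poly(xs)
    dy = arch_poly.deriv()(xs)
    normals = np.stack([-dy, np.ones_like(dy)], axis=1)      # perpendicular to the arch
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    pano = np.zeros((volume.shape[0], n_samples))
    offsets = np.arange(-thickness, thickness + 1)
    for i, (x0, y0, n) in enumerate(zip(xs, ys, normals)):
        px = np.clip(np.round(x0 + offsets * n[0]).astype(int), 0, volume.shape[2] - 1)
        py = np.clip(np.round(y0 + offsets * n[1]).astype(int), 0, volume.shape[1] - 1)
        pano[:, i] = volume[:, py, px].max(axis=1)
    return pano
```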
NASA Astrophysics Data System (ADS)
Roggemann, M.; Soehnel, G.; Archer, G.
Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
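For reference, the least squares reconstructor named above can be expressed in a few lines: given the geometry matrix G relating phase values to Hartmann-sensor slopes, the reconstructor is the pseudo-inverse of G. The sketch below is generic; the exponential and SPGD-based reconstructors evaluated in the study are more involved and are not shown.

```python
import numpy as np

def least_squares_reconstructor(G):
    """Slopes satisfy s = G @ phi, so the least-squares phase estimate is
    phi_hat = pinv(G) @ s, which minimizes ||G @ phi_hat - s||."""
    return np.linalg.pinv(G)

# Usage sketch with one frame of measured slopes:
# R = least_squares_reconstructor(G)   # G: (n_slopes, n_phase_points)
# phi_hat = R @ s                      # s: measured x/y slopes
```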